Podcasts about Folding

  • 1,209 PODCASTS
  • 1,628 EPISODES
  • 42m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • LATEST: Feb 23, 2026



Latest podcast episodes about Folding

Rover's Morning Glory
MON PT 1: Rover thinks Duji is trying to make him look bad

Rover's Morning Glory

Feb 23, 2026 · 47:55


Rover thinks Duji is trying to make him look bad in front of Gianna. Folding or crumpling the toilet paper. Has JLR been pronouncing his name wrong? Is Rover upset that Duji is allegedly dating? Eric Dane is the latest celebrity to have a GoFundMe account set up.

Rover's Morning Glory
MON FULL SHOW: Rover thinks Duji is trying to make him look bad, Charlie is given home remedies to help his illness, and Duji denies having a secret boyfriend

Rover's Morning Glory

Feb 23, 2026 · 179:44


Rover thinks Duji is trying to make him look bad in front of Gianna. Folding or crumpling the toilet paper. Has JLR been pronouncing his name wrong? Is Rover upset that Duji is allegedly dating? Eric Dane is the latest celebrity to have a GoFundMe account set up. MSNOW reported that Kash Patel was at the Olympics to watch the men's hockey game. Americans are stuck in Mexico. The Mexican government said it killed the nation's most wanted cartel boss "El Mencho." JLR is wearing a new hoodie. During the BAFTA ceremony a man with Tourette's yells out a racial slur while Michael B. Jordan and Delroy Lindo presented an award. Duji gives Charlie home remedies to help his sickness. Eye drop prescription to help improve eyesight. Duji denies having a secret boyfriend. A woman shares her hotel hack for washing your underwear.

Served with Andy Roddick
Kim's Racket Smashing, Victoria Mboko in the Top 10, & More | Love All w/ Kim Clijsters

Served with Andy Roddick

Feb 18, 2026 · 50:57


Join 4-time Grand Slam Champion Kim Clijsters and tennis reporter Blair Henley as they break down the latest headlines, from Ben Shelton's title run in Dallas to the pressure of following up a Grand Slam win when there's barely time to celebrate. The duo dives into the behind-the-scenes of the Nexo Dallas Open, Victoria Mboko breaking the top 10, Karolína Muchová's tactical brilliance and coaching change, Maria Sakkari's technical adjustments, and the controversy surrounding Iga Świątek and Aryna Sabalenka's withdrawals from Dubai. Welcome to Love All!

If you want to hang out with us behind the scenes, follow us on all of our socials: https://www.instagram.com/loveallpodcast/ · https://www.tiktok.com/@loveallpodcast · https://x.com/loveallpodcast

⏰ TIMESTAMPS:
0:00 Welcome to Love All
1:37 Kim's Achilles rehab update
3:14 The truth about Kim smashing rackets
6:03 Folding laundry as a US Open Champion
7:21 Learning hard lessons about focus at 18
10:16 Behind the scenes at the Dallas Open
14:27 The character of Ben Shelton off-court
17:45 Karolina Muchova tactical masterclass
21:03 Muchova's Belgian rehab and new coaching
23:14 Victoria Mboko breaks into the Top 10 at 19
25:38 The technical shift in Maria Sakkari's game
27:53 The controversy surrounding Dubai withdrawals
34:05 Walkovers piling up in Dubai
35:12 Destanee Aiava and the toxic side of tennis
39:40 Federer and Carillo Hall of Fame sellout
43:23 Rec Room
56:33 Closing thoughts and wrap-up

Learn more about your ad choices. Visit megaphone.fm/adchoices

Just Tap In with Emilio Ortiz
#270 Peter Crone – The Hidden Crisis Within: Unlocking the Truth Behind Human Freedom

Just Tap In with Emilio Ortiz

Feb 18, 2026 · 79:15


In this revealing conversation, Peter Crone, The Mind Architect, joins Emilio Ortiz to explore the psychological patterns shaping humanity's current reality and the global freedom crisis emerging as we enter 2026. The discussion uncovers how unconscious identity, belief systems, trauma conditioning, and the human psyche govern our sense of freedom, choice, and personal power. Rather than external threats, Peter reveals that the real crisis is internal, rooted in how the mind creates illusion, fear, separation, and limitation. This conversation is essential for those seeking clarity on consciousness, emotional healing, and human freedom.

✦ Learn more about The Deep Dive Membership | https://iamemilioortiz.com/the-deep-dive/

Together, we uncover what most don't see: how liberation begins with awareness, how the mind can imprison and free us, and why 2026 marks a pivotal psychological and spiritual turning point for humanity. Peter offers rare insights into breaking mental conditioning, dissolving false identities, and reclaiming inner sovereignty in a rapidly changing world. This interview will challenge your assumptions and shift how you perceive reality.

Peter Crone is a globally recognized mindset coach, speaker, and thought leader. Known as The Mind Architect, he works with elite athletes, entrepreneurs, creatives, and world-class performers to help them dissolve limiting beliefs, heal emotional conditioning, and access lasting freedom. His work bridges psychology, consciousness, and lived experience, guiding people to uncover the unconscious patterns shaping their reality so they can live with clarity, purpose, and inner sovereignty.

PODCAST CHAPTERS
00:00 – Peter Crone Intro
5:32 – Why More People Are Seeking "Freedom" Right Now
7:27 – Do We Really Have Free Will? Literal vs Figurative Choice
9:11 – Soul Alignment vs Random Life Choices
12:29 – Folding the Mind Back on Itself
15:29 – Identity, Time, Space & Who We Really Are
19:25 – The Evolution of Humanity & The "New Human"
22:38 – The Untapped Potential of Human Consciousness
26:29 – Peter's Recent Personal Breakthrough: Commitment & Relationships
29:40 – Are Humans Always "Programmed"? Can We Ever Be Free?
31:10 – How to Raise a Truly Free Child (Beyond Limiting Beliefs)
34:21 – Understanding the Reality of People Who Cause Harm
39:09 – How to Forgive Yourself After an "Unforgivable" Act
42:57 – The Root of Human Suffering: A Malfunction in Thinking
45:55 – Why We Fall in Love with Who We Become in Relationships
51:24 – Peter's First Major Heartbreak
56:57 – Thriving vs Collapsing in Times of Uncertainty
59:37 – The "Death of Identity" & How Often It Happens
1:04:08 – What Karma Really Is (Beyond Good & Bad)
1:06:06 – Seeing the Preciousness in Every Human Being
1:08:47 – The Ultimate Purpose of the Human Experience
1:11:32 – "I Am Freedom Itself": Stages of Spiritual Evolution
1:12:30 – Peter's Work, Socials & Transformational Mastermind
1:12:52 – Final Trio #1: What Softened Peter's Nervous System
1:14:00 – Final Trio #2: Returning to Inner Peace in Chaotic Times
1:15:33 – Final Trio #3 (Time Capsule Question): Message for Future Leaders

Guest: Peter Crone, The Mind Architect
✦ Website | https://www.petercrone.com/
✦ Instagram | https://www.instagram.com/petercrone/
✦ The Mastermind | https://www.petercrone.com/mastermind
✦ Peter's Courses & Membership | https://www.petercrone.com/programs

Host: Emilio Ortiz
✦ IG | https://www.instagram.com/iamemilioortiz/
✦ Subscribe to Channel | https://www.youtube.com/EmilioOrtiz

© 2026 Emilio Ortiz. All rights reserved. Content from the Just Tap In Podcast is protected under copyright law.

Legal Disclaimer: The views, thoughts, and opinions expressed by guests on Just Tap In are solely those of the guest and do not necessarily reflect the views or opinions of Emilio Ortiz or the Just Tap In Podcast. All content is for informational purposes only and should not be considered professional advice.

NPC: Next Portable Console
Folding, Flipping, and a Handheld Heavyweight

NPC: Next Portable Console

Feb 17, 2026 · 30:55


This week, Brendon, Federico, and John cover a raft of new budget handhelds, a new OLED tablet from Lenovo, TrimUI leaks, and more. Also available on YouTube here.

Links and Show Notes

The Latest Portable Gaming News
• News Recap
• MagicX Two Dream Teased In New Images
• Anbernic RG Vita is an Android handheld game console with a familiar design
• Lenovo's next gaming tablet is coming soon (in China)

AYANEO Corner
• AYANEO NEXT 2 handheld gaming PC with AMD Strix Halo will cost $1,799 to $3,499 at launch (up to $4,299 retail)
• Just look at Ayaneo's absolute unit of a Windows gaming "handheld"
• AYANEO launches KONKR FIT handheld gaming PC with Ryzen AI 9 HX 370/470 for $999 and up
• Mangmi Pocket MAX now available: Android handheld with modular controllers
• Trimui 4.7-inch flip leak
• TrimUI SP-Style Handheld Also Leaks
• Tomb Raider Reboot Launches on iOS and Android (MacRumors)

Subscribe to NPC XL
NPC XL is a weekly members-only version of NPC with extra content, available exclusively through our new Patreon for $5/month. Each week on NPC XL, Federico, Brendon, and John record a special segment or deep dive about a particular topic that is released alongside the "regular" NPC episodes. You can subscribe here: https://www.patreon.com/c/NextPortableConsole

Leave Feedback for John, Federico, and Brendon
• NPC Feedback Form

Credits
• Show Art: Brendon Bigley
• Music: Will LaPorte

Follow Us Online
• On the Web: MacStories.net, Wavelengths.online
• Mastodon: NPC, Federico, John, Brendon
• Bluesky: NPC, MacStories, Federico Viticci, John Voorhees, Brendon Bigley

Affiliate Linking Policy

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod: on YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction: quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models: like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that. Like, in some sense, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So can you say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that a lot of progress was made on was the ability to predict the structure of single-chain proteins. So proteins can, like, be composed of many chains. And single-chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes.
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in the absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved from the perspective of structure prediction; when it isn't, it's much more challenging. And I think it's also worth differentiating, because we sometimes confound them a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured state. And that, I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.

Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model the different, you know, proteins are not static. They move, they take different shapes based on their energy states. And I think we are also not that good at understanding the different states that the protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints we were able to, you know, make such dramatic progress.

Brandon [00:09:45]: So I want to ask, why do the intermediate states matter? But first, I kind of want to understand, why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how diseases work, we often try to boil it down to: okay, what is going right in the case of, you know, our normal biological function, and what is going wrong in the case of the disease state. And we boil it down to kind of, you know, proteins and other molecules and their interactions. And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of those interactions. It's a bit like the difference between having, kind of, a list of parts that you would put in a car and seeing the car in its final form; you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about how the protein folds, or, you know, how the car is made, to some extent, is that sometimes when something goes wrong, you know, there are, you know, cases of proteins misfolding.
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line, I think it's in the AlphaFold 2 manuscript, where they sort of discuss also, like, why we're even hopeful that we can target the problem in the first place. And there's this notion that, like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that, yeah, we might be able to predict this very, like, constrained thing that the protein does so quickly. And of course that's not the case for, you know, all proteins, and there's a lot of, like, really interesting mechanisms in the cells. But yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is the way it used to be actually studied. And part of the reason why people thought it was impossible is that it used to be studied as kind of, like, a classical example of, like, an NP problem. Uh, like, there are so many different, you know, types of shapes that, you know, these amino acids could take, and this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of, actually, more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, kind of seeing machine learning solve it. Clearly there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, understand, but that these models have learned.

Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago, and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations, and that you see this again and again in different species. So explain: why does that give us a good hint that they're close by to each other? Yeah.

RJ [00:13:41]: Um, like, think of it this way: that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it, right, in three dimensions. And so it's almost like the protein, through several, probably random, mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. Uh, so the whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of, like, different positions compensating for each other.

Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what is the end state.

RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right.
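The co-evolution signal the speakers just described can be made concrete with a small sketch. This is an illustration only (not AlphaFold or Boltz code, and an invented toy alignment): the simplest version of "positions that evolve together" is the mutual information between two columns of a multiple sequence alignment, and high-MI column pairs are candidate 3D contacts.

```python
# Illustrative sketch only: co-evolution as mutual information (MI) between
# columns of a multiple sequence alignment (MSA). High-MI pairs are candidate
# 3D contacts. The MSA below is a toy example invented for this sketch.
from collections import Counter
from math import log2

def mutual_information(msa: list[str], i: int, j: int) -> float:
    """MI (in bits) between alignment columns i and j."""
    n = len(msa)
    col_i = Counter(s[i] for s in msa)          # marginal counts, column i
    col_j = Counter(s[j] for s in msa)          # marginal counts, column j
    pairs = Counter((s[i], s[j]) for s in msa)  # joint counts
    mi = 0.0
    for (a, b), c in pairs.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((col_i[a] / n) * (col_j[b] / n)))
    return mi

# Toy MSA: columns 0 and 4 mutate in lockstep (A<->C, G<->W), the way a
# compensating pair would; column 1 varies independently; column 2 is constant.
msa = ["ARNDC", "AKNDC", "GRNDW", "GKNDW", "ARNDC", "GKNDW"]
print(mutual_information(msa, 0, 4))  # → 1.0 (perfectly co-evolving pair)
print(mutual_information(msa, 1, 2))  # → 0.0 (no co-evolution signal)
```

Real contact-prediction pipelines add corrections (e.g. for shared ancestry and sampling bias), but the underlying statistic is this kind of column-pair covariation.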
It's almost like, you know, you have this big, like, three-dimensional valley, you know, where you're sort of trying to find these, like, low-energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already, like, kind of close to the solution, maybe not quite there yet. And there's always this question of, like, how much physics are these models learning, you know, versus, like, just pure statistics. And, like, one of the things, at least that I believe, is that once you're in that sort of approximate area of the solution space, then the models have, like, some understanding, you know, of how to get you to, like, you know, the low-energy state. And so maybe they have some light understanding of physics, but maybe not quite enough, you know, to know how to navigate the whole space. Right. Okay.

Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something. Yeah.

Gabriel [00:15:31]: One interesting explanation of how AlphaFold works that I think is quite insightful (of course, it doesn't cover the entirety of what AlphaFold does) is one I'm going to borrow from Sergio Chinico from MIT. The interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. (MSA is multiple sequence alignment.) Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is running some kind of, you know, Dijkstra algorithm, where it's sort of decoding: okay, these have to be close. Okay. Then if these are close, and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.

Brandon [00:16:42]: Interesting. So there's kind of two different things going on, the kind of coarse-grained and then the fine-grained optimizations. Interesting. Yeah. Very cool.

Gabriel [00:16:53]: Yeah. You mentioned AlphaFold 3, so maybe this is a good time to move on to that. So yeah, AlphaFold 2 came out and it was, like, I think, fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold 3 came out. And maybe for some more history, like, what were the advancements in AlphaFold 3? And then, I think, after that, we'll talk a bit about how it connects to Boltz. But anyway. Yeah. So after AlphaFold 2 came out, you know, Jeremy and I got into the field, along with many others, and, you know, the clear problem that was, you know, obvious after that was: okay, now we can do individual chains. Can we do interactions: interactions between different proteins, proteins with small molecules, proteins with other molecules? And so, why are interactions important? Interactions are important because, to some extent, that's kind of the way that, you know, these machines, these proteins, have a function. You know, the function comes from the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain; they're made of multiple chains. And then these multiple chains interact with other molecules to give the function to those.
And on the other hand, you know, when we try to intervene on these interactions (think about, like, a disease, think about, like, a biosensor, or many other cases), we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. You know, this problem, after AlphaFold 2, became clear as kind of one of the biggest problems in the field to solve. Many groups, including kind of ours and others, you know, started making contributions to this problem of trying to model these interactions. And AlphaFold 3 was, you know, a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do, while, you know, some of the rest of the field really tried to model different interactions separately (you know, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA have their structure), is they put everything together and, you know, trained very large models with a lot of advances, including changing some of the key architectural choices, and managed to get a single model that was able to set new state-of-the-art performance across all of these different modalities: protein–small molecule, which is critical to developing new drugs; protein–protein; understanding, you know, interactions of proteins with RNA and DNA; and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?

Gabriel [00:19:48]: Yeah, so one critical one, which was not necessarily unique to AlphaFold 3 (there were actually a few other teams in the field, including ours, that proposed this), was moving from modeling structure prediction as a regression problem, where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from this distribution. And this achieves two things. One is it starts to allow us to model more dynamic systems. As we said, you know, some of these proteins can actually take multiple structures, and so, you know, you can now model that through modeling the entire distribution. But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really changing the way that you think about uncertainty in the model. So if the model is, so to speak, undecided between different answers, what's going to happen in a regression model is that it's going to make an average of those different answers it had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers, and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. And that now looks a lot more like a traditional transformer than, you know, the very specialized equivariant architecture that it was in AlphaFold 2.

Brandon [00:21:41]: So this is the bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from, you know, being, like, a simple transformer. This field is one of the, I'd argue, very few fields in applied machine learning where we still have architectures that are very specialized.
Many people have tried to replace these architectures with plain transformers, and there is a lot of debate in the field, but I think the consensus is that the performance we get from the specialized architectures is still vastly superior to what we get from a single transformer. Another interesting thing on the modeling and machine learning side, which is somewhat counterintuitive given other fields and applications, is that scaling hasn't really worked the same way in this field. Models like AlphaFold2 and AlphaFold3 are still very large models, though.

RJ [00:29:14]: We were in a place, I think, where we had some experience working with the data and with this type of model, and that put us in a good position to produce it quickly. I would even say we could have done it quicker; the problem was that for a while we didn't really have the compute, so we couldn't train the model. We actually only trained the big model once. That's how much compute we had. So while the model was training, we were finding bugs left and right, a lot of them ones I wrote. I remember doing surgery in the middle of the run: stopping it, making the fix, relaunching. We never actually went back to the start; we just kept training with the bug fixes along the way, which would be impossible to reproduce now. That model has gone through such a curriculum that it learned some weird stuff.
But somehow, by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that we trained most of that model on a cluster from the Department of Energy. It's a shared cluster that many groups use, so we would basically train the model for two days, and then it would go back into the queue and sit there for a week. Oh, yeah. It was pretty painful. Towards the end I was talking with Evan, the CEO of Genesis, telling him a bit about the project and about this frustration with compute, and luckily he offered to help. So we got help from Genesis to finish up the model; otherwise it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then there's some progression from there.

Gabriel [00:31:06]: Yeah. I would say that Boltz-1, but also the other set of models that came out around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold3. I would still say that even to this day there are some specific instances where AlphaFold3 works better. One common example is antibody-antigen prediction, where AlphaFold3 still seems to have an edge in many situations. Obviously these are somewhat different models: you run them, you obtain different results, so it's not always the case that one model is better than the other. But in aggregate, we still saw an advantage, especially at the time.

Brandon [00:32:00]: So AlphaFold3 still has a bit of an edge.
We should talk about this more when we get to BoltzGen, but how do you know one model is better than the other? I make a prediction, you make a prediction; how do you know?

Gabriel [00:32:11]: Yeah. The great thing about structure prediction (once we go into the design space of designing new small molecules and new proteins this becomes a lot more complex, but the great thing about structure prediction) is that, a bit like CASP did, you can evaluate models by training on the structures that were released across the field up until a certain time. One of the things we haven't talked about that was really critical in all this development is the PDB, the Protein Data Bank. It's a common resource, basically a common database, where every structural biologist publishes their structures. So we can train on all the structures that were deposited in the PDB until a certain date, and then look for recent structures and ask which ones look quite different from anything published before, because we really want to measure generalization.

Brandon [00:33:13]: And then on these new structures you evaluate all the different models. So you just need to know when AlphaFold3 was trained, and you intentionally train to the same date or something like that. Exactly. Right. Yeah.

Gabriel [00:33:24]: And so this is the way you can fairly easily compare these models; obviously, that assumes you know how they were trained. You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
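The time-based split described above can be sketched as follows; the entry records, field names, and cutoff date are invented for illustration and are not the actual PDB schema:

```python
from datetime import date

# Illustrative entries: train only on structures released before the cutoff.
entries = [
    {"pdb_id": "1ABC", "released": date(2019, 5, 1)},
    {"pdb_id": "7XYZ", "released": date(2022, 3, 10)},
    {"pdb_id": "8QRS", "released": date(2023, 8, 21)},
]

CUTOFF = date(2021, 9, 30)

train = [e for e in entries if e["released"] < CUTOFF]
test_candidates = [e for e in entries if e["released"] >= CUTOFF]

def is_novel(entry, train_set):
    # Stand-in novelty check. In practice this would compare sequence or
    # structural similarity against everything in the training set, so
    # the benchmark measures generalization, not memorization.
    return all(entry["pdb_id"] != t["pdb_id"] for t in train_set)

test = [e for e in test_candidates if is_novel(e, train)]
print([e["pdb_id"] for e in train])  # ['1ABC']
print([e["pdb_id"] for e in test])   # ['7XYZ', '8QRS']
```

The same cutoff is then applied to every model being compared, which is the point of the exchange above.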
Actually, I think DockGen is a really funny story; I don't know if you want to talk about that. It's interesting.

Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like the work, but honestly, most of the time the most useful feedback is people telling us where it doesn't work. At the end of the day, and this holds across other fields of machine learning, it's critical to set clear benchmarks to make progress, and as you make progress on those benchmarks, you need to improve them and make them harder and harder. That's how the field operates. The example of DockGen: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to predict interactions between proteins and small molecules, and it came out about a year after AlphaFold2 was published. On the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. On the other hand, when we started giving these tools to biologists, one example being our collaboration with the group of Nick Polizzi at Harvard, we started noticing a clear pattern: for proteins that were very different from the ones the model was trained on, it was struggling. And so it seemed clear that this was probably where we should put our focus.
So we first developed a new benchmark with Nick and his group, and then went after the question: what can we change about the current architecture to improve this pattern of generalization? And that's the same thing we're still doing today: find where the model doesn't work, build a benchmark for it, and then throw every idea we have at the problem.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. It's very clear there are a ton of things the models don't work well on, but I think one thing that's probably undeniable is the pace of progress and how much better we're getting every year. So if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?

RJ [00:36:45]: It's one of those things. Being in the field, you don't see it coming, you know? Hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source. My question is partly: okay, if you open source, everyone can copy what you did, but it's also about balancing priorities, right? The community is saying, I want this, there are all these problems with the model, but maybe my customers don't care. So how do you think about that?
Yeah.

Gabriel [00:37:26]: So I would say a couple of things. One is that part of our goal with Boltz, and this is also established in the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, and couldn't just stay an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. So a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to turn them into something that can really enable scientists in the industry. That goes into building the right workflows, ones that take in, for example, the data and directly answer the questions that chemists and biologists are asking, and also into building the infrastructure. All this to say that even with models fully open, we see a ton of potential for products in the space. The critical part about a product is that, even with an open-source model, running the model is not free. As we were saying, these are pretty expensive models, and, maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But then you start getting to a point where compute and compute costs become a critical factor.
So putting a lot of work into building the right infrastructure and the right optimizations really allows us to provide a much better service than the raw open-source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models open source, because the critical role of open-source models is helping the community make progress on the research, from which we all benefit. So we'll continue, on the one hand, to put some of our base models open source so that the field can build on top of them; as we discussed earlier, we learn a ton from the way the field uses and builds on our models. On the other hand, we'll build a product that gives the best possible experience to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. A bit like, even though I am a computer scientist and machine learning scientist, I don't necessarily take an open-source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience on this front.

Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, who would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, and it's the same with Boltz in our case, it's not that easy to do if you're not a computational person. Part of the goal here is that we obviously continue to build the interface for computational folks, but also that the models are accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. And that community grew very quickly. Did that surprise you? What has the evolution of that community been, and how has it fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But yeah, it's been great. We have a Slack community with thousands of people on it, and it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to try to answer everyone's questions and help with the few people that we were. It ended up that people would answer each other's questions and help one another. So the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: And that's the Slack part, but we've also had a nice community on GitHub. I think we aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us.
But yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions built on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the code base. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on making it easy to use and accessible?

RJ [00:43:14]: I think so. Yeah. We've heard it from a few people over the years now. Some people still think it should be a lot nicer, and they're right. But I think it was, at the time, maybe a little bit easier to use than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is the fact that it hasn't really been just one model.
Maybe we'll talk about it: after Boltz-1, there were another couple of models released or open-sourced soon after. We continued that open-source journey with Boltz-2, where we are not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interactions between these different molecules, which is a critical property you often want to optimize in discovery programs. And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there's almost an expectation, one we take very much to heart, that across the entire suite of different tasks we have the best, or among the best, models out there, so that our open-source tools can be the go-to models for everybody in the industry.

I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: I mean, we've had many contributions. One of the interesting ones: we had one individual who wrote a complex GPU kernel for a part of the architecture. The funny thing is that that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, like people figuring out ways to hack the model to do things like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, and this was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in where they predicted the antibody would bind. So he ran an experiment: in this model you can condition, basically give hints. He gave systematic hints to the model: you should bind to this residue, you should bind to the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids, yes. The first amino acid, the 11th amino acid, and so on. So it's like doing a scan: you condition the model on each hint in turn, then look at the confidence of the model in each of those cases and take the top one. It's a somewhat crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. There are some interesting ideas where, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: okay, how can I do this not by brute force but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we get to BoltzGen. Our ability to take a structure and determine that the structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
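The scanning trick above can be sketched as follows; `predict_with_hint` is a hypothetical stand-in for running the structure model with a residue-contact condition, and its confidence function is invented purely for illustration:

```python
# Sketch of epitope scanning: condition the model on a binding hint
# every 10 residues, then keep the most confident prediction.
def predict_with_hint(antigen_len, hint_residue):
    # Invented stand-in scoring: pretend the model is most confident
    # when the hint lands near a "true" epitope at residue 42.
    confidence = 1.0 / (1.0 + abs(hint_residue - 42))
    return {"hint": hint_residue, "confidence": confidence}

def scan_epitopes(antigen_len, stride=10):
    predictions = [
        predict_with_hint(antigen_len, hint)
        for hint in range(0, antigen_len, stride)  # residues 0, 10, 20, ...
    ]
    # Rank by model confidence and take the top prediction.
    return max(predictions, key=lambda p: p["confidence"])

best = scan_epitopes(antigen_len=120)
print(best["hint"])  # 40: the hint closest to the pretend epitope
```

This is exactly the crude inference-time search described above: many conditioned runs, then selection by the model's own confidence.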
If you can sample a ton, and you assume that if you sample enough you're likely to have a good structure in there, then it really just becomes a ranking problem. Part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.

Brandon [00:48:17]: But I guess, in my understanding, there's a diffusion model, you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then you finally... Can you talk about those different parts? Yeah.

Gabriel [00:48:34]: So first of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight that interaction is. For BoltzGen, what we did was take that foundation model and fine-tune it to design entirely new proteins. The way that works is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein
and also what the different amino acids of that protein are. So the way BoltzGen operates is that you feed in a target protein that you may want to bind, or DNA or RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things.

And that's with natural language, or?

It's basically prompting. We have a spec that you specify, you feed that spec to the model, and the model translates it into a set of tokens, a set of conditioning for the model, a set of blank tokens. Then, as part of the diffusion process, it decodes a new structure and a new sequence for your protein. And then, as Jeremy was saying, we take that and try to score how good a binder it is to the original target.

Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule, and that gives you a score? Exactly.

Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you re-predict the structure with something like Boltz-2, and then you compare that structure with what the design model produced. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you were trying to design, which gives you much better confidence that it's a good design. So that's the first filter.
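A minimal sketch of this consistency check, under strong simplifications: `refold` stands in for re-predicting the designed sequence with a separate structure model, and structures are reduced to 1-D coordinate lists instead of real 3-D atom positions:

```python
import math

def rmsd(a, b):
    # Root-mean-square deviation between two coordinate lists.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def consistency_filter(designs, refold, rmsd_cutoff=2.0):
    # Keep only designs whose independently refolded structure
    # reproduces the structure the design model intended.
    return [
        seq for seq, designed in designs
        if rmsd(designed, refold(seq)) <= rmsd_cutoff
    ]

# Tiny worked example with two candidate designs and an invented refolder.
designs = [("GOOD", [0.0, 1.0, 2.0]), ("BAD", [0.0, 1.0, 2.0])]
refold = lambda seq: [0.1, 1.1, 2.1] if seq == "GOOD" else [5.0, 5.0, 5.0]
print(consistency_filter(designs, refold))  # ['GOOD']
```

The cutoff value here is arbitrary; the point is that a design is only trusted when prediction and design agree.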
And the second filter we applied as part of the pipeline that was released is to look at the confidence the model has in the structure. Now, going to your question about predicting affinity: unfortunately, confidence is not a very good predictor of affinity. That's one of the things where we've actually made a ton of progress since we released Boltz-2.

Brandon [00:52:03]: And we have some new results that we're going to announce soon: the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of that interaction. Okay. Just backing up a minute. So your diffusion model predicts not only the protein sequence but also its folding. Exactly.

Gabriel [00:52:32]: And actually, you can...
One of the big things we did differently compared to other models in the space, and there were some papers that had done this before, but we really scaled it up, was essentially merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works is that the only thing you're doing is predicting structure: the only supervision we give is on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure but also the identity of the amino acid the model believed was there. So instead of having two supervision signals, one discrete and one continuous, which don't interact well together, we built an encoding of sequences in structures that lets us use exactly the same supervision signal we used for Boltz-2, which is largely similar to what AlphaFold3 proposed and is very scalable. And we can use that to design new proteins. Oh, interesting.

RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work. Yeah.

Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, for this encoding you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. It's such a unique way of doing this; it was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.

Gabriel [00:54:33]: Yeah, papers had proposed this, and Hannes really took it to the large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to feel that this sort of wet-lab, real-world validation is the whole problem, or not the whole problem, but a big giant part of it. So can you talk a little bit about the highlights? Because to me the results are impressive, both from the perspective of the model and of the effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola's and Regina Barzilay's labs, and at Boltz, we are not a biolab, and we are not a therapeutics company.
So to some extent we were forced to look outside our group and team for the experimental validation. One of the things Hannes really pioneered on the team was the idea: can we go not just to one specific group, with one specific system, maybe overfitting a bit to that system in validating, but test this model across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get validation that spans many different tasks? He basically put together, I think, something like 25 different academic and industry labs that committed to testing some of the designs from the model, some of this testing is still ongoing, and giving the results back to us, in exchange for hopefully getting some great new sequences for their task. He coordinated this very wide set of scientists, and already in the paper I think we shared results from eight to ten different labs: designing peptides targeting ordered proteins, peptides targeting disordered proteins, results on designing proteins that bind to small molecules, and results on designing nanobodies, across a wide variety of targets. That gave the paper a lot of validation for the model, validation that was broad.

Brandon [00:57:39]: And so, would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.

Gabriel [00:57:45]: Obviously, you need to do some work into, quote unquote, humanizing them, making sure they have the right characteristics so they're not toxic to humans, and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller: they're easier to manufacture, but that comes with other potential challenges, maybe a little less selectivity than something that has more hands. But there's this big desire to design mini proteins, nanobodies, and small peptides, which are just great drug modalities.

Brandon [00:58:27]: Okay. I think where we left off, we were talking about validation in the lab, and I was very excited about seeing all the diverse validations you've done. Can you go into more detail about some specific ones? Yeah.

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands, then we rank them and pick the top N, in this case 15 for each target. Then we measure the success rates: both how many targets we were able to get a binder for, and, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones: we had a cool one where you take a small molecule and design a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing, which is pretty cool.
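The bookkeeping behind those two success rates can be sketched as follows; the per-target measurements here are invented for illustration:

```python
# For each target we test a fixed number of top-ranked designs, then
# report both the per-target hit rate (how many targets yielded at
# least one binder) and the overall design hit rate.
results = {
    "target_A": [True, False, True],   # each entry: did this design bind?
    "target_B": [False, False, False],
    "target_C": [True, True, False],
}

targets_with_binder = sum(1 for hits in results.values() if any(hits))
target_success_rate = targets_with_binder / len(results)

all_designs = [hit for hits in results.values() for hit in hits]
design_hit_rate = sum(all_designs) / len(all_designs)

print(target_success_rate)  # 2 of 3 targets yielded at least one binder
print(design_hit_rate)      # 4 of 9 tested designs bound
```

Both numbers matter: a model can hit many targets with only a small fraction of its designs working, or fewer targets with a very high per-design hit rate.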
We had a disordered protein, I think you mentioned also. And yeah, I think some of those were some of the highlights.

Gabriel [00:59:44]: So I would say that the way we structured some of those validations was, on the one end, we had validations across a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides that would target the RACC, which is a target involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets. We designed some proteins to bind small molecules. And then some of the other testing we did was really trying to get a broader sense: how does the model work, especially when tested in a setting that requires some generalization? One of the things we found with the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtering to things where there is no known interaction in the PDB. So basically the model has never seen this particular protein, or a similar protein, bound to another protein. There is no way that the model, from its training set, can say, okay, I'm just going to tweak something and imitate this particular interaction. And so we took those nine proteins.
We worked with a CRO, Adaptive, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing that we saw was that on two thirds of those targets, from those 15 designs, we were able to get nanomolar binders. Nanomolar is, roughly speaking, just a measure of how strong the interaction is; roughly speaking, a nanomolar binder has approximately the binding strength that you need for a therapeutic. Yeah.

So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? Yeah. This is, I guess, your first product, if that's what you want to call it. Can you talk about what Boltz Lab is and what you hope people take away from it?

RJ [01:02:44]: You know, as we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it in three. The first one: it's one thing to predict a single interaction, for example a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and there's certainly a need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure that the model has a good enough understanding of the target so we can design something against it? There are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
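As a rough aside, not from the episode: the "nanomolar binder" benchmark mentioned above can be made concrete with the standard thermodynamic relation between a dissociation constant Kd and binding free energy, dG = RT ln(Kd), assuming standard conditions (T = 298 K).

```python
import math

def binding_free_energy(kd_molar: float, temp_k: float = 298.0) -> float:
    """Standard binding free energy dG = RT * ln(Kd), in kcal/mol."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    return R * temp_k * math.log(kd_molar)

# A 1 nM binder (roughly therapeutic-grade) vs a much weaker 1 uM binder
print(round(binding_free_energy(1e-9), 1))  # -12.3 kcal/mol
print(round(binding_free_energy(1e-6), 1))  # -8.2 kcal/mol
```

The roughly 4 kcal/mol gap between micromolar and nanomolar affinity is why hit strength, not just hit rate, matters when comparing design methods.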
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little bit more complicated: we also need to make sure that the molecules are synthesizable. The way we do that is that we have a generative model that learns to use appropriate building blocks, such that it designs within a space we know is synthesizable. So there's a whole pipeline, really, of different models involved in being able to design a molecule. And that's been the first thing; we call them agents. We have a protein agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're just calling them agents?

RJ [01:04:33]: They sort of perform a function on the user's behalf; they're more of a recipe, if you wish. And I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing a hundred thousand possible candidates to find the good one. That is a very large amount of compute: for small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. And so ideally you want to do that in parallel, otherwise it's going to take you weeks.
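The design loop described here, generate a large pool of candidates, score them, and advance only the top few, can be sketched in a few lines. This is a toy illustration, not the actual Boltz pipeline: the sequences and the scoring heuristic below are made up, where a real system would score with a learned model.

```python
import heapq
import random
from typing import Callable, Iterable

def top_k_designs(candidates: Iterable[str],
                  score: Callable[[str], float],
                  k: int = 15) -> list[str]:
    """Score every candidate and keep only the k highest-scoring designs."""
    return heapq.nlargest(k, candidates, key=score)

# Toy stand-ins: random 12-residue sequences ranked by a dummy heuristic
# (hypothetical; a real pipeline would use model-based scores).
random.seed(0)
pool = ["".join(random.choices("ACDEFGHIKLMNPQRSTVWY", k=12)) for _ in range(10_000)]
picked = top_k_designs(pool, score=lambda s: s.count("W") + s.count("Y"), k=15)
print(len(picked))  # 15 designs advance to wet-lab testing
```

The interesting part in practice is everything hidden inside `score`: structure prediction, confidence metrics, and, for small molecules, synthesizability constraints.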
And so we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. Exactly. And to some degree, whether you use 10,000 GPUs for a minute, it's the same cost as using one GPU for God knows how long, right? So you might as well parallelize if you can. A lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing it at the same time. And the third one is the interface, and the interface comes in two shapes. One is in the form of an API, and that's really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: So we're already partnering with a few distributors that are going to integrate our API. And then the second part is the user interface, and we've put a lot of thought into that too. This is where the idea I mentioned earlier of broadening the audience comes in; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
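The cost equivalence RJ describes, many GPUs for a short time versus one GPU for a very long time, is simple arithmetic under per-GPU-hour billing. The numbers below (candidate count, seconds per design, hourly rate) are hypothetical, chosen only to match the scale mentioned in the conversation.

```python
def job_cost(n_gpus: int, hours_per_gpu: float, usd_per_gpu_hour: float = 2.0) -> float:
    """With per-GPU-hour billing, cost depends only on total GPU-hours consumed."""
    return n_gpus * hours_per_gpu * usd_per_gpu_hour

# Hypothetical campaign: 100k candidates at ~10 GPU-seconds each
total_gpu_hours = 100_000 * 10 / 3600

serial = job_cost(1, total_gpu_hours)                # one GPU, ~278 hours of wall clock
parallel = job_cost(1_000, total_gpu_hours / 1_000)  # 1,000 GPUs, ~17 minutes
print(round(serial, 2), round(parallel, 2))          # same dollar cost either way
```

Since the dollar cost is the same, the only reason not to parallelize is infrastructure complexity, which is exactly what a shared fleet amortizes across users.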
So Boltz Lab is a combination of these three objectives into one cohesive platform. Who is this accessible to? Everyone. You do need to request access today; we're still ramping up the usage, but anyone can request access. If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you may also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies we can deploy the platform in a more secure environment; those are more customized deals that we make with the partners. That's sort of the ethos of Boltz, I think: this idea of serving everyone, not necessarily going after just the really large enterprises. That starts from the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regards to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Is it possible that you can essentially exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for any person to roll their own system?

RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would probably cost anyone to take the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product: something that scales really well. We really wanted to get to a point where we could keep prices very low, in a way that makes it a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now suddenly the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So you're basically leaving the domain that you know you are good at. How do you validate that?

RJ [01:09:22]: Yeah. There are obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: okay, with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. And there's really no way around that. I think we've really ramped up the amount of experimental validation that we do, so that we track progress as scientifically soundly as possible.

Gabriel [01:10:00]: Yeah, no, I think one thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those.
When we do an experimental validation, we try to test across tens of targets, so that on the one hand we can get a much more statistically significant result, and it really allows us to make progress from the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w

The Laundromat Resource Podcast
238. Customers BEG Him to Do Their Laundry with Randy Roberts


Play Episode Listen Later Feb 11, 2026 96:15


Welcome back to the Laundromat Resource Podcast! In this episode, host Jordan Berry sits down with Randy Roberts, a powerhouse in the world of laundry pickup and delivery who has built a thriving commercial laundry business in a market where most would have doubted success. From running large sales organizations in corporate America to coaching his son's sports teams, Randy Roberts shares how his background paved the way for his rapid rise in the laundry industry, even after being advised not to pursue commercial clients.

In this conversation, you'll hear Randy Roberts walk us through how he and his cousin started with two laundromats and quickly scaled up their pickup and delivery business by focusing on high-value commercial accounts, outpacing giants like Cintas along the way. Randy Roberts discusses everything from sales strategies, the importance of SEO, and building strong industry partnerships, to investing in advanced equipment like the Foltex folding machine.

Whether you're an industry veteran or just curious about the laundry business, this episode is packed with actionable insights on scaling commercial laundry services, the value of mentorship, and the mindset required to turn challenges into opportunities.
Get ready for an instant classic with concrete advice for operators at any stage, and a reminder that action, learning, and building the right relationships are the real keys to success.

In this episode, Jordan and Randy discuss:
00:00 "Randy Roberts' Laundry Success"
08:35 "Choosing Retirement and New Paths"
11:44 Laundromats and Revenue Growth Journey
19:34 "Default to Action Mindset"
25:24 "The Rich Don't Work Money"
28:03 Breaking Laundry Business Rules
35:00 Scaling Success in Delivery Business
37:30 "Building Trust for Success"
43:39 "Standing Firm on Pricing"
51:17 Confident Pricing Wins Clients
56:27 "Firing Clients to Save Time"
01:03:23 "Struggles with Chemical Supply Vendors"
01:07:40 "Revolutionizing Efficiency in Folding"
01:13:56 "Efficient Laundry Operations Insight"
01:17:09 Ego-Free Growth in Business
01:24:41 "Sharing Price Secures Business"
01:28:01 "Active Stewardship Builds Trust"
01:32:28 "Missing the Connection Opportunity"
01:36:14 "Path to Harmony"

Progress, Potential, and Possibilities
Proteus Folding: Turning Protein Structure Into Real Drug Decisions - Dr. Stephanie Linker, Ph.D. And Dr. Philipp Schnee, Ph.D. - Merck KGaA, Darmstadt, Germany


Play Episode Listen Later Feb 11, 2026 51:57


Over the past few years, artificial intelligence has rapidly entered drug discovery, but one of the true “holy grail” challenges inside pharma is no longer just predicting what proteins look like; it is understanding how molecules actually interact: how proteins bind drugs, antibodies, RNA, and each other, and how those insights can guide better decisions long before anything reaches the lab.

Early breakthroughs in structure prediction made protein models widely accessible, but real biology happens at interfaces, in motion, and often in fleeting conformations that determine whether a therapy ultimately succeeds or fails. Today's conversation explores what it means to move into this next chapter, where structural predictions are translated into actionable insight for real-world drug development.

Joining us are two scientists from Merck KGaA, Darmstadt, Germany ( https://www.emdgroup.com/en ), working at the intersection of protein structure prediction, molecular dynamics, and generative design, helping to build internal platforms that turn computational models into practical decision tools for therapeutic discovery.

Dr. Stephanie Linker, Ph.D. is a Senior Computational Biochemist in Merck's Group Digital Innovation unit, where she leads initiatives in generative antibody design, de novo protein binder development, and advanced structure prediction platforms. Her work focuses on how molecular shape, flexibility, and dynamics influence whether a designed molecule actually performs in biological systems.

Dr. Philipp Schnee, Ph.D. is a Computational Protein Design expert at Merck KGaA, currently part of the GoGlobal Data & AI rotation program.
His research bridges high-resolution molecular dynamics simulations with experimental biochemistry to understand protein function, mutation effects, and mechanisms that can be leveraged for enzyme engineering and inhibitor design.

Together, their work reflects a broader shift happening across the pharmaceutical industry, away from static structures and standalone models and toward integrated platforms that combine folding, binding, ranking, and experimental validation to guide smarter, faster therapeutic decisions.

In this episode, we explore what these next-generation tools can do today, where their limitations remain, and why the ability to move from structure prediction to decision-ready insight may become one of the most important frontiers in modern drug discovery.

AI drug discovery, protein structure prediction, computational biology, biologics design, pharmaceutical R&D

#DrugDiscovery #ArtificialIntelligence #AlphaFold #ProteinFolding #Biotech #PharmaInnovation #ComputationalBiology #StructuralBiology #AIinHealthcare #AntibodyEngineering #MolecularDynamics #FutureOfMedicine #SystemsBiology #LifeSciences #ProgressPotentialPossibilities #MachineLearning #BioTechPodcast

Support the show

Ba'al Busters Broadcast
Alpha Male Universe Recall Pt 2 Folding and Begging Forgiveness


Play Episode Listen Later Feb 8, 2026 223:20


If you're not following my YouTube and Rumble channels, you're weaker than Clavicular's cheek bones after a spazmodic bonesmashing fit. I've never paid attention to any of these people, but it's sure to enter my radar when they begin their direct work to destroy the psyche of mankind for their Ghoulie handlers.

Many thanks for the channel campaign help. We're still a ways away from the goal. See the links below to help get the stuff we need. Thank You!

Use Code BB5 here: https://SemperFryLLC.com
Click Picture on the Right for the AZURE WELL products and use code BB5 for your discount.
Find clickable portals to Dr Monzo and Dr Glidden on Dan's site, and it's the home of the best hot sauce, his book, and Clean Source Creatine-HCL.
Join Dr. Glidden's Membership site here: https://leavebigpharmabehind.com/?via=pgndhealth
Code: baalbusters for 25% OFF
Make Dr. Glidden Your Doctor
Pods & Exclusives AD-FREE! https://patreon.com/c/KristosCast
Independent Channel Depends On YOU. Contribute Below:
https://buymeacoffee.com/BaalBusters
https://paypal.me/BaalBusters
https://GiveSendGo.com/BaalBusters
Follow these Below:
Twitter Account: https://x.com/KristosCast
https://open.spotify.com/show/0vtEmTteIzD2nB5bdQ8qDR
Want Dan's book or his Award winning hot sauces and spicy honey? Go here: https://SemperFryLLC.com
Books and Documentaries You Should Own: https://www.bannedbyamazon.com/
Use Code: BBDan for 10% Off
Find clickable portals to Dr Monzo and Dr Glidden on Dan's site.
Subscribe to the NEW dedicated channel for Dr Glidden's Health Solutions Show https://rumble.com/c/DrGliddenHealthShow
Become a supporter of this podcast: https://www.spreaker.com/podcast/ba-al-busters-broadcast--5100262/support.

The Stupid History Minute
The Folding Chair


Play Episode Listen Later Feb 7, 2026 1:18 Transcription Available


The Stupid History of The Folding ChairBecome a supporter of this podcast: https://www.spreaker.com/podcast/the-stupid-history-minute--4965707/support.

The Relentless Diaries
Put Her In The Green


Play Episode Listen Later Feb 4, 2026 161:13


Welcome to episode #202 of The Relentless Diaries!

On this episode we are joined by Alicia "Ace" West, where we discuss a variety of topics including motherhood, Black History Month, the Epstein files, the 2026 Grammy Awards and much much more!

Chapters:
0:01 - Introducing our guest Alicia "Ace" West / The wild journey of becoming a mother / surrogacy ethics
33:30 - 2026 Grammy Awards / Black History Month
1:06:30 - Khaby Lame sold his likeness for a billion?!
1:10:20 - Epstein files are being released
1:42:05 - Digital footprints are not being taken seriously
1:54:20 - Folding clothes for men
2:26:00 - #DearRelentless Justin Timberlake VS Justin Bieber

Hosted on Acast. See acast.com/privacy for more information.

The Bitcoin Cash Podcast
#172: Fun(d) Tokens & BChat Beta feat. Miguel


Play Episode Listen Later Feb 3, 2026 128:44


Software engineer and BCH BLAZE Builder Miguel discusses how his project is developing in the BCH-1 program, his other BCH endeavours, and the beta release of BChat for Selene wallet. Loved the chat? Tell us in the comments. What's your take on BChat? Oh, and guess what? This podcast just hit 5 YEARS OLD! If you're loving the BCH journey, drop a LIKE to wish us a happy anniversary - means the world!

The Jesse Kelly Show
Hour 1: Hungry Hungry Hippos


Play Episode Listen Later Jan 30, 2026 35:50 Transcription Available


They aren't trying to be right or tell the truth. How is "turning the temperature down" working? Why would these billionaires, who made their fortunes under capitalism, fund communism? Catering to the dregs of society to stay in power. Folding under The Left's propaganda.

Follow The Jesse Kelly Show on YouTube: https://www.youtube.com/@TheJesseKellyShow

See omnystudio.com/listener for privacy information.

The Creative Pulse podcast
Ep 139: Beck Rivera - Book Folding


Play Episode Listen Later Jan 28, 2026 34:54


Beck Rivera makes folded book art, folding paper in such a precise way that an image emerges from within a book's pages. Each book is hand made and completely unique.He's created books that showcase images such as the Mona Lisa, William Shakespeare, a night sky over mountains, a bee, a whale, a tennis racquet and ball, chess pieces, a sailboat, and more. He also makes custom designs, patterns, tutorials and does demonstrations related to the art. On this episode, host Angela de Burger chats with Beck about what intrigued him about the world of book folding, the balance of technical and creative skills he uses, and how he develops the design for each folded book he makes.  Say hi to Beck:  Website: becksbooks.co  Instagram - @becks.books  TikTok - @becksbooks----Creative Pulse Podcast socials:  Instagram: @creativepulsepodcastMusic credit: https://www.purple-planet.com 

The Knife Junkie Podcast
Most Loved Folding Knives of 2025: The Knife Junkie Podcast (Episode 652)


Play Episode Listen Later Jan 21, 2026


Bob DeMarco returns with another year-end roundup on The Knife Junkie Podcast, counting down his favorite folding knives from 2025. After covering his top fixed blades last week, this episode focuses on the folders that earned the most pocket time throughout the year. From tactical self-defense designs to classic patterns with sentimental value, this list covers the full range of what makes a great folding knife.The episode begins with community feedback and a pocket check featuring the Cuda Maxx 5.5, Jack Wolf Knives Timber Jack, Brock Blades Magni XL, and Work Tuff Gear Steadfast L. Bob also discusses the classic Case Trapper. He shares his experience finally putting his MoraKniv carbon steel fixed blade to work. The Knife Life News segment covers new releases from Sencut, the return of the Bareknuckle, and the Stealth Fighter-inspired Zero Tolerance ZT0117.The main event features folders that stood out in 2025. The list includes the Cold Steel Rajah 3, Kansept Bison, JW Kollab Tango, DC Blades Sting, Buck Range Elite, North Mountain Blade BBMN, Kansept Deadite, Manganas Steel Aurelia, a vintage Buck 112 Ranger with serious history, and the Cold Steel Mayhem. Each knife earned its spot through real-world carry and use, not just initial impressions.Bob shares honest thoughts on what worked and what surprised him throughout the year. Some knives delivered exactly what he expected, while others exceeded all predictions. The Manganas Aurelia claimed top honors as his favorite folder of the year, while the vintage Buck 112 from his friend Mike carried emotional weight that goes beyond materials and design. Whether you prefer tactical folders, classic patterns, or modern designs, this list offers something worth considering.Watch the full video to see all these knives in action and hear detailed discussions about blade grinds, materials, and what makes each folder special. 
This episode demonstrates that the most reliable knife reviews come from actual use over time, providing viewers with the information they need to make informed decisions about their next purchase.Find the list of all the knives shown in the show and links to the Knife Life news stories at https://theknifejunkie.com/652. Support the Knife Junkie channel with your next knife purchase. Find our affiliate links at https://theknifejunkie.com/knives. You can also support The Knife Junkie and get in on the perks of being a patron, including early access to the podcast and exclusive bonus content. Visit https://www.theknifejunkie.com/patreon for details. Let us know what you thought about this episode and leave a rating and/or a review. Your feedback is appreciated. You can also email theknifejunkie@gmail.com with any comments, feedback, or suggestions. To watch or listen to past episodes of the podcast, visit https://theknifejunkie.com/listen. And for professional podcast hosting, use our podcast platform of choice: https://theknifejunkie.com/podhost.

Work On Your Game: Discipline, Confidence & Mental Toughness For Sports, Business & Life | Mental Health & Mindset

In this episode, I break down the three ways we respond to pressure: fold, fight, or flow. Every challenge puts you at a fork in the road, even if you don't notice it. The choice you make is not random; it shows your conditioning, your preparation, and your level of control. I explain what each response really means and how it plays out under pressure. As you listen, you can check yourself and see which one you default to.

Show Notes:
[01:45] #1 Folding is what we call a collapsing response.
[07:04] #2 Fighting is an ego reaction.
[11:14] #3 Flowing is your regulated response.
[14:42] #4 Your default mood becomes your reputation.
[16:46] #5 Power is about choosing your response, not winning in the moment.
[17:54] Recap

Next Steps:
Power Presence is not taught. It is enforced. If you are operating in environments where hesitation costs money, authority, or leverage, the Power Presence Mastermind exists as a controlled setting for discipline, execution, and consequence-based decision-making. Details live here: http://PowerPresenceProtocol.com/Mastermind

This Masterclass is the public record of standards. Private enforcement happens elsewhere.

All episodes and the complete archive: → WorkOnYourGamePodcast.com

The CultCast
Guest episode: Would you put a sticker on a folding iPhone? (Cult of Mac Podcast #2)


Play Episode Listen Later Jan 14, 2026 69:15


The new Cult of Mac podcast: Apple Podcasts, Spotify Podcasts, Overcast, Castro, RSS feed, Amazon Music, Pocket Casts, Podcast Addict, Deezer, Player FM, Podchaser, Listen Notes, Castbox, Goodpods, Metacast

This week's stories:

Samsung teases the foldable iPhone's biggest upgrade
At CES 2026, Samsung showcased a crease-less folding OLED panel that might be headed for Apple's first foldable iPhone.

Decorating your iPhone with tiny stickers is 2026's hot new trend
Personalize your iPhone 17 Pro camera plateau with stickers. Lots of people are doing it in fun and creative ways.

This is the weirdest auction of Steve Jobs memorabilia we've ever seen
Some rather strange Steve Jobs memorabilia is up for auction, showcasing unique items related to the Apple co-founder.

New fitness app Reps & Sets 26 gets serious about strength training
Designed exclusively for iPhone, iPad and Apple Watch, strength-training app Reps & Sets 26 was developed by a long-time Cult of Mac writer. Reps & Sets 26 on the App Store.

Top 10 Apple setups of 2025
Cult of Mac's top 10 Apple setups of 2025 excel in raw computing power, aesthetic beauty, functional innovation, or all three.

Turn your M4 Mac mini into a small-but-mighty ‘Mac Pro'

Katharsis / Processed
Katharsis / Processed - Episode January 11, 2026


Play Episode Listen Later Jan 12, 2026


(Kevin) - some January blues

Playlist:
Siobhan Wilson - There Are No Saints
Ólöf Arnalds - Úfinn sjór
Kirin McElwain - Softer, Still
M83 - Spinning Fury
Old Saw - Ribbons of Marble
Chris Eckman - Borrowed Tune
Joan Shelley & Nathan Salsburg - Little Wing
Alio Die - The Ascension of Endless Enchantment
The Horse - Broken Door
Rosanna, Princess of Kerry - Not In My Car
verity den - push down hard / tess II
Stephen Vitiello & Taylor Deupree - ii
Rafael Anton Irisarri - Empire Systems (Kevin Richard Martin Rework - Frozen Mix)
John Swanke, featuring Sam Baribault on harmonium - Cloudburst
Paperbark - Frailest Beginnings
Mats Erlandsson & Yair Elazar Glotman - On the Folding of Leaves
David Shea - A Sutra
NC Lawlor - Laurie Rose
Alan Sparhawk, featuring Hollis Sparhawk - Not Broken
Denison Witmer - Confessions
Grant-Lee Phillips - Someone
Michael Scott Dawson - Wintering
Jane Siberry, featuring K.D. Lang - Calling All Angels

In the News
227: CES Craziness, Folding Up the iPhone


Play Episode Listen Later Jan 9, 2026 65:04 Transcription Available


Watch the video! https://youtu.be/ImXXGCUIfmA

In the News blog post for January 9, 2026: https://www.iphonejd.com/iphone_jd/2026/01/in-the-news811.html

00:00 It's 2026!

The Comics Canon
Episode 248: Drome


Play Episode Listen Later Jan 7, 2026 89:18


On this episode, we welcome writer, actor and friend of the show Michael Owl, host of The Origin Story Podcast, as we dive into one of the best-reviewed graphic novels of 2025: Drome by Jesse Lonergan! This imaginative epic combines a visually dazzling creation myth with an action-packed fantasy adventure, as the blue-skinned demigoddess Blue attempts to impose order on a chaotic world, opposed by the violent one-eyed warlord Patch, each acting as proxies for remote gods. Folding in elements from such sources as the Bible, Greek mythology and classic fantasy, it feels both familiar and aggressively new as Lonergan plays with the comic-book format in fun and creative ways. But is all of that enough for Drome to survive the judgment of that pantheon of deities and demigods known as … The Comics Canon?

In This Episode:
· Legend of the Guardians: The Owls of Ga'Hoole
· Chekhov's Night Bull
· Star Trek The Next Generation: Darmok
· Santa, When the Claus Fell
· There There by Tommy Orange
· Slugfest: Inside the Epic 50-Year Battle Between Marvel and DC, by Reed Tucker

Subscribe to Michael's Substack

Join us in two weeks as we ramp up to our special 250th episode with a look at The Earth Stories (issues #28-36) of Scott McCloud's Zot! Until then:
Please consider donating to the Comic Book Legal Defense Fund
Impress your friends with our Comics Canon merchandise!
Rate us on Apple Podcasts!
Send us an email!
Hit us up on Facebook or Bluesky!
And as always, thanks for listening!

FPCSANANTONIO PODCAST
Children's Stories with Ms. Rebecca: Folding Chairs


Play Episode Listen Later Dec 8, 2025 3:28


Children's Stories with Ms. Rebecca: Folding Chairs by First Presbyterian Church San Antonio

Mo News
Trump Backs Hegseth Over Strike; Shingles Vax May Slow Dementia; Triple-Folding Smartphone; Kids Want Crypto For Christmas


Play Episode Listen Later Dec 3, 2025 38:49


Headlines:
– Welcome To Mo News (02:00)
– Michael and Susan Dell Donate $6.25 Billion To Fund ‘Trump Accounts' For American Kids (02:30)
– Trump Backs Hegseth As Strike Fallout Grows (07:20)
– Former Honduras President Juan Orlando Hernández Freed After Trump Pardon (11:30)
– Some Friendly Questions At First Briefing For New Pentagon Press Corps (16:20)
– Witkoff and Kushner Met Putin For Five Hours on Ukraine Plan (22:50)
– Shingles Vaccine May Actually Slow Down Dementia, Study Finds (26:00)
– Samsung's Next Salvo Against Apple: A Triple-Folding Smartphone (28:10)
– Kids Who Have Smartphones By Age 12 Have Higher Risk Of Depression, Obesity ~ Study (30:45)
– Did Your Kid Ask For Crypto in Their Stocking? They're Not Alone (33:15)
– Apple's Answer To Spotify's Wrapped: Replay (36:15)
– On This Day In History (37:30)

Thanks To Our Sponsors:
– LMNT - Free Sample Pack with any LMNT drink mix purchase
– Industrious - Coworking office. 50% off day pass | Promo Code: MONEWS50
– Incogni - 60% off an annual plan | Promo Code: MONEWS
– Boll & Branch - 25% off, plus free shipping | Code: MONEWS
– Aura Frames - $35 off best-selling Carver Mat frames | Promo Code: MONEWS

The Other Side of Midnight with Frank Morano
Hour 1: Don't Look Under The Folding Chair | 12-01-25

The Other Side of Midnight with Frank Morano

Play Episode Listen Later Dec 1, 2025 51:59


Lionel, the "mahatma of social media" and "panjandrum of propaganda", leads the charge to pull back the veil and expose what's behind the curtain. Lionel explores the deliberately orchestrated promotion of nudism (or "body liberation") and nude cruises, revealing why he finds the activity "hideous" and "grotesque". Finally, hear how Big Tech wields power through propaganda (including Mockingbird and menticide/brainwashing) and illusory legislation that sounds great but says "nothing". Learn more about your ad choices. Visit megaphone.fm/adchoices

UBC News World
Best Folding Treadmill Models: Why Home Fitness Experts Favor These Features

UBC News World

Play Episode Listen Later Nov 24, 2025 8:52


Discover why folding treadmills have become the go-to choice for home fitness enthusiasts. Learn about the surprising performance capabilities, space-saving advantages, and key features that separate quality models from the rest. Visit https://www.soletreadmills.com/products/sole-f63
SOLE Fitness
City: Salt Lake City
Address: 56 Exchange Pl.
Website: https://www.soletreadmills.com/

Mega
Folding the Fitted Sheet with Jacquis Neal

Mega

Play Episode Listen Later Nov 23, 2025 26:49


Keece Caprice (Jacquis Neal of Dropout) has a fitted sheet for anyone slain in the spirit.

Daily Kos Radio - Kagro in the Morning
Kagro in the Morning - November 17, 2025

Daily Kos Radio - Kagro in the Morning

Play Episode Listen Later Nov 17, 2025 116:44


David Waldman and Greg Dworkin didn't just fall off the turnip truck; they've been around the block and happen to be the sharpest knives in the drawer on today's KITM. Folding like a card table, America's greatest power bottom, Donald K. Trump ordered Republicans to demand that the Democrats finally release the Epstein files. Now, Republicans are set to decisively reveal the whole truth about… what is "barely" legal, anyhow? Lamest of ducks, Trump wasn't all that to begin with, and is decaying beyond the hope of bronzers, Latinos, or even bros. Living hand to mouth doesn't leave much for heat and electricity as inflation is becoming everyone's biggest concern, and affordability is this administration's least. Drop a penny in Marjorie Traitor Greene's slot and she sings a different tune, but really, it's all the same song and dance. That is, until she drops a dime on daddy. The gun is on the table with Venezuela, as we ready to drink their milkshake. Emil Bove explains that we are actually at war with boats. People and cargo are only collateral damage. Coming apart at the seams, the bottom is falling out of the DOJ as some see the light.

Retro Radio Podcast
Lum and Abner – The Folding Cot. 420903

Retro Radio Podcast

Play Episode Listen Later Nov 14, 2025 14:45


Back on a regular routine in the store, Abner gets the chance to go on a hunting trip. Lum won't let him go, since he's trying to get the books…

Big Fatty Online
BFO4634 – They Set Up Some Folding Chairs

Big Fatty Online

Play Episode Listen Later Nov 13, 2025 20:01


The Fat One is back with a recap of his day in Fat Acres which included more contest entries, supper at Page's Okra Grill, the 20th Anniversarium of the Dancin', cheese cookies and the coupon! Happy National Indian Pudding Day.

Virtual Economy
Episode 200: Amazon Primed for Gaming Exit (News Show) - Part Two

Virtual Economy

Play Episode Listen Later Nov 11, 2025 77:31


We're joined by two special guests on this milestone episode. Please welcome Andy Pan (the only person who has previously co-hosted a show with us) and Greg Alderton, Virtual Economy's lead moderator! With 14,000 people laid off, studios shuttered, and projects abandoned, Amazon seems ready to give up on most or all of its gaming dreams. Also: EA earnings ride off into the sunset, Capcom posts a strong quarter, Remedy's in trouble, and Microsoft is inscrutable. Also: Sony responds to Tencent's request to dismiss the Light of Moritram lawsuit, Circana report on U.S. video game spending for September 2025, and Microsoft is a little too cozy with the U.S. government. You can support Virtual Economy's growth via our Ko-Fi and also purchase Virtual Economy merchandise!

TIME STAMPS
[00:01:12] - Investment Interlude/Divestment Dinterlude
[00:11:01] - Quick Hits
[00:24:08] - Labor Report
[01:06:11] - FAFO Award

SOURCES

INVESTMENT INTERLUDE
NetEase divests Fantastic Pixel Castle as studio hunts for new publisher to avoid closure | GamesIndustry
Neon Doctrine is Closing, Folding into Raw Fury | Iain Garner on LinkedIn
Neon Doctrine is Closing, Folding into Raw Fury | Vlad Tsypljak on LinkedIn
Warner Bros. Discovery Initiates Review of Potential Alternatives to Maximize Shareholder Value | Warner Bros. Discovery

LABOR REPORT
Amazon Makes 'Significant' Cuts In Video-Game Division | Bloomberg (Paywall)
Ubisoft RedLynx Proposes Strategic Refocus on Small Screens | RedLynx
Ubisoft Massive is Restructuring | Massive
Open Letter to the Executive Leadership at Build A Rocket Boy | IWDB GameWorkers
Grand Theft Auto made him a legend. His latest game was a disaster | BBC
Netflix reportedly shutters studio behind Squid Game mobile spinoff | Engadget
Rovio is laying off 36 staff, says Angry Birds Dream Blast has 'not been performing as expected' | VGC
Testronic shuts down another studio, this time in Bucharest | Game Developer
Report: Metroid Dread developer MercurySteam accused of retaliation and forced crunch | Game Developer
"If there's one thing MercurySteam does well, it's playing on fear, because there's nowhere else to go": How the working environment has degraded at one of Spain's biggest video game studios | 3DJuegos
Krafton declares transformation into an "AI-first" company, investing over KRW 100 billion | Krafton
Heart Machine making layoffs and ending development on Hyper Light Breaker | Game Developer
Outerloop Layoffs | Chandana Ekanayake on Bluesky
'Grand Theft Auto' Studio Accused of Union Busting After Firings | Bloomberg (Paywall)
Battle.net Workers Unionize As Microsoft Neutrality Agreement Expires | Aftermath

Vince Coakley Podcast
The Vince Coakley Radio Program | Late night reaction to Dem's folding on shutdown | 50 year mortgage

Vince Coakley Podcast

Play Episode Listen Later Nov 11, 2025 58:48 Transcription Available


Vince dives into the late night reaction to the Democrats folding on ACA subsidies they were fighting to keep during the shutdown. Plus, President Trump tosses out the idea of a 50 year mortgage. That and much more on The Vince Coakley Radio Program. See omnystudio.com/listener for privacy information.

Hashkafa of the Moadim
Parshas Vayeira: Folding the Priya

Hashkafa of the Moadim

Play Episode Listen Later Nov 7, 2025 11:48


A Small Voice: Conversations With Photographers

Paul Sng is a bi-racial British Chinese filmmaker based in Edinburgh, Scotland whose work focuses on people who challenge the status quo. He has directed six feature documentaries, including Poly Styrene: I Am A Cliche (winner of BIFA 2021 Best Documentary, BIFA 2021 Raindance Discovery Award), Tish (Sheffield DocFest 2023 Opening Gala film) and Reality Is Not Enough (Edinburgh International Film Festival 2025 Closing Gala film). Paul strives to make bold and creatively ambitious films that connect emotionally with audiences, working collaboratively with great teams to tell stories about outsiders and amplify rebellious voices. In 2022 he was named as a BAFTA Breakthrough Artist and directed Folding, his first short drama film, funded by Screen Scotland and BFI Network.

In episode 268, Paul discusses, among other things:
Growing up in London with a single mum
Having outsider syndrome… and imposter syndrome, and using that to your advantage
His educational history, including a couple of false starts
Making a feature as his first ever film with the ‘confidence of ignorance'
The importance of finding a good Producer (and what their job involves)
The important questions he asks himself in considering whether to make a film
Structure and working with an editor
Applying the same narrative principles to documentary as are prevalent in fiction
The creative treatment of actuality
Finding an audience
Currently in production, Little Warrior

Referenced:
Trainspotting
Symposium, Plato
Bruce Lee
Jackie Chan
David Yip
John Woo
Wong Kar-Wai
Colin McArthur
Sleaford Mods
Nathan Hannawin
Bruce Robinson
Orson Welles
Rebecca Mark-Lawson
Jennifer Corcoran
Moonage Daydream
The Atrocity Exhibition, JG Ballard
The Man In The White Suit
Emma Butt

Website | IMDB page | Instagram

Episode sponsor: Aftershoot. Your complete AI workflow: Streamline photo culling, editing, and retouching so you can create stunning images, grow your business, and save 18+ hours every month. Try it completely free for a 30 day trial and get a 15% discount at checkout once you sign up with the code SMALLVOICEPOD.

Become an A Small Voice podcast member here to access exclusive additional subscriber-only content and the full archive of 200+ previous episodes for £5 per month. Subscribe to my weekly newsletter here for everything A Small Voice related and much more besides. Follow me on Instagram here. Build Yourself a Squarespace Website video course here.

The Knife Junkie Podcast
12 Great Full-Titanium Folding Knives: The Knife Junkie Podcast (Episode 638)

The Knife Junkie Podcast

Play Episode Listen Later Nov 5, 2025 Transcription Available


Bob DeMarco takes you through his top 12 full-titanium folding knives in this week's episode of The Knife Junkie Podcast. From affordable production pieces to custom collaborations, these knives showcase why titanium has become the premium material of choice for serious collectors and everyday carriers.

The episode covers designs that range from tactical to refined, including the Greek-made Manganas Steel Aurelia, the accessible Kansept Bison, and the show-stopping Asymmetrical Nighthorse. Bob shares his personal experiences carrying and using each knife, explaining what makes them special beyond just their materials and construction.

Highlights include the J.W. Kollab Tango that Bob carried all summer, the art deco-inspired American Blade Works Model 2, and the Spartan Harsey Folder with his personal logo etched into the titanium. Each knife demonstrates how titanium construction creates folders that feel solid without wearing you down during everyday carry.

Bob also covers his pocket check featuring the Cold Steel 4Max Scout with custom cocobolo scales, updates from Knife Life News, including new releases from Kizer and We Knife Co., and additions to his collection like the Artisan Cutlery Revel and Malice Co. Scorpion King. The First Tool segment features the Benchmade Bali-Song and its place in knife history.

Whether you own titanium folders already or are considering your first one, this episode gives you solid information on what is available and what to look for. Bob explains why these 12 designs stand out and how titanium ages beautifully while maintaining strength and functionality through real-world use.

Find the list of all the knives shown in the show and links to the Knife Life news stories at https://theknifejunkie.com/638. Support the Knife Junkie channel with your next knife purchase. Find our affiliate links at https://theknifejunkie.com/knives. You can also support The Knife Junkie and get in on the perks of being a patron, including early access to the podcast and exclusive bonus content. Visit https://www.theknifejunkie.com/patreon for details.

Let us know what you thought about this episode, and leave a rating and a review. Your feedback is appreciated. You can also email theknifejunkie@gmail.com with any comments, feedback, or suggestions. To watch or listen to past episodes of the podcast, visit https://theknifejunkie.com/listen. And for professional podcast hosting, use our podcast platform of choice: https://theknifejunkie.com/podhost.

Edge & Flow Podcast
Folding Knife Development

Edge & Flow Podcast

Play Episode Listen Later Nov 4, 2025 87:58


TJ and Lucas discussed TJ's journey in knife manufacturing, including his transition from fixed blades to folders and the challenges of prototype production and scaling operations. They explored strategies for diversifying TJ's business while maintaining his existing product line, with Lucas sharing insights on balancing growth and production efficiency. The conversation concluded with discussions about shop setup, including equipment purchases and layout changes, while touching on industry trends and market conditions in the knife industry.

Our Kids Our Schools
Who Runs This Town? Local Government & the Art of Folding in the Cheese

Our Kids Our Schools

Play Episode Listen Later Nov 3, 2025 70:13


Most people can name the President—but not their local city council members. In this episode of The Purple Zone, Alexis sits down with Matt Todd, host of The Ranch Podcast, to unpack what local government actually does and why it matters. From who really decides where roads go (hint: it's not your Eagle City Council members) to how citizens can get involved at every level, this episode gets you started with Local Gov 101. Matt shares insights from his conversations with elected officials, candidates, and civic leaders—offering real Idaho examples that bring policy to life. Along the way, a parallel emerges with that unforgettable Schitt's Creek scene: when Moira tells David to “fold in the cheese,” but never explains how. That's how local government can feel: knowing it's there, but not knowing how to describe who does what and how. Whether you're a newcomer or a lifelong Idahoan, this conversation breaks down structures, responsibilities, and the small steps we can all take to keep democracy strong from the bottom up. Because nothing changes unless we change it.

The John Batchelor Show
38: Power Struggle Over NASA and the Moon Race Guests: Douglas Messier, David Livingston Douglas Messier discusses a power struggle over NASA, including acting administrator Sean Duffy's interest in folding NASA into the Department of Transportation and

The John Batchelor Show

Play Episode Listen Later Oct 31, 2025 10:45


Power Struggle Over NASA and the Moon Race Guests: Douglas Messier, David Livingston Douglas Messier discusses a power struggle over NASA, including acting administrator Sean Duffy's interest in folding NASA into the Department of Transportation and his concern that the United States might lose the Moon Race 2.0 to China. Duffy has challenged SpaceX's contract for the Artemis 3 moon landing, aiming to accelerate lander development amid fears that dependence on the complex Starship/Superheavy architecture might delay the mission beyond 2029.

Throughline
Throughline Dances

Throughline

Play Episode Listen Later Oct 29, 2025 29:51


Stuck in traffic? Glued to your desk chair? Folding yet another pile of your kids' laundry? We GOT you!! Take a break, turn up the volume, and shake it out with this special episode of Throughline, a tribute to dance music, all songs composed by our very own Ramtin Arablouei. To access bonus episodes and listen to Throughline sponsor-free, subscribe to Throughline+ via Apple Podcasts or at plus.npr.org/throughline. Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy

RNZ: Our Changing World
SAR4SaR - The folding, floating search and rescue device

RNZ: Our Changing World

Play Episode Listen Later Oct 27, 2025 26:33


New Zealand's marine search and rescue region stretches from Antarctica to north of Samoa. If someone goes missing without any means of communication, that's a lot of ocean to search. Now researchers and the New Zealand Defence Force have teamed up to develop and test a low-tech, no-battery device that can be picked up by radar – including that beamed down by satellites orbiting Earth. Sign up to the Our Changing World monthly newsletter for episode backstories, science analysis and more.

In this episode:
01:30 At Mission Bay Beach Dr Tom Dowling demonstrates the device
03:40 In the University of Auckland's Space Institute lab the team explain the device design, and how it works
10:00 Dr Tom Dowling talks about the radar reflector trials on Campbell Island and at Omaha beach
13:00 Dr David Galligan, director of Defence Science and Technology, on why DST is interested in the device
19:00 The satellites are the second side of the equation. Dr Tom Dowling explains how that works
20:50 Back at Mission Bay Beach Dr Tom Dowling explains how the radar reflector would be an additional part of a kit on a boat and how it would work to narrow down the search area

Go to this episode on rnz.co.nz for more details

Data in Biotech
The Future of Co-Folding and Federated Learning with Apheris

Data in Biotech

Play Episode Listen Later Oct 22, 2025 49:08


Robin Rohm, CEO and Co-Founder of Apheris, joins Ross Katz to explore how federated learning is unlocking secure, cross-company collaboration in pharma. Discover how Apheris is enabling biopharma leaders to train cutting-edge co-folding models without sharing sensitive data, why AlphaFold 3 wasn't enough, and what OpenFold 3 means for the future of AI in drug discovery.

What You'll Learn in This Episode
>> Why federated learning is a game-changer for pharma data sharing and AI-driven research
>> How OpenFold 3 builds on AlphaFold's legacy to solve the protein-ligand interaction challenge
>> The role of structural benchmarking in model development and validation
>> How Apheris enables privacy-preserving collaboration between major pharma players
>> The importance of high-quality, proprietary datasets in advancing co-folding model accuracy

Meet Our Guest
Robin Rohm is the CEO and Co-Founder of Apheris, a leader in federated data networks for life sciences. With a background in mathematics and computational genomics, Robin is advancing secure AI collaboration in pharma and biotech.

About The Host
Ross Katz is Principal and Data Science Lead at CorrDyn. Ross specializes in building intelligent data systems that empower biotech and healthcare organizations to extract insights and drive innovation.

Connect with Our Guest:
Sponsor: CorrDyn, a data consultancy
Connect with Robin Rohm on LinkedIn

Connect with Us:
Follow the podcast for more insightful discussions on the latest in biotech and data science.
Subscribe and leave a review if you enjoyed this episode!
Connect with Ross Katz on LinkedIn

Sponsored by…
This episode is brought to you by CorrDyn, the leader in data-driven solutions for biotech and healthcare. Discover how CorrDyn is helping organizations turn data into breakthroughs at CorrDyn.

Roz & Mocha
1339 - Splitting the Bill, Folding Your Pizza & Banana Etiquette!

Roz & Mocha

Play Episode Listen Later Oct 21, 2025 21:11


What everyday skills from before 2000 are useless now? Should you split the bill or pay it all when dining out with a friend? Is folding your pizza the right move—or is crust-first the way to go? Plus, morning rituals, banana etiquette, and which brand tagline best describes your personality. It's all in this episode of Ask Roz & Mocha.

The Leftover Pieces; Suicide Loss Conversations
Healing After Suicide: Meaning That Doesn't Erase Pain—Service • Art • Legacy

The Leftover Pieces; Suicide Loss Conversations

Play Episode Listen Later Oct 20, 2025 5:52


Meaning is a companion, not a cure; a small act of service or creation makes room for both love and ache in grief after suicide. Journal prompt: “A value I still trust—and one 10-minute way to live it…”

What we mean by “meaning that doesn't erase pain” (so we're clear): Meaning isn't a cure or a performance. It's a small, honest act that lets love move alongside ache. Examples:
Service: sending a resource to someone struggling; leaving water/snacks for tomorrow-you; holding a door on a hard day.
Art: four lines of writing, a quick sketch, a photo of something true—not pretty, true.
Legacy: speaking their name, lighting a candle before dinner, adding one memory to your witness log.
Keep it 10 minutes or less, tied to a value you still trust (kindness, truth, presence, creativity, service).

A Flicker (Hope) — A purposeful minute: Folding one kindness into the day can warm the edges. Keep the warmth.

To Rebuild (Healing) — Pick one lane (≤10 minutes):
Service: share a resource, hold a door, donate $1, check on a neighbor.
Art: write four lines, sketch one object, snap a photo that feels true.
Legacy: speak their name, light a candle, note one memory in your “witness log.”

Take a Step (Becoming) — Name why it matters: Finish the sentence: “This act honors [value/them/me] because [reason].” Say it out loud; then do the act.

Choose-your-energy menu:
Hollow (low): Light a candle or speak their name once.
Healing (medium): Do one 10-minute act in service, art, or legacy.
Becoming (higher): Schedule this act 2–3 times this week.

Food for Thought Today: Meaning isn't a verdict that you're “better.” It's a humble way to carry what hurts while letting your love move somewhere tangible. The act is small on purpose; the point is movement, not proof.

Exhale. Keep what serves you; leave the rest. I'll be here again tomorrow.

The Muckrake Political Podcast
Yeah, This Ain't Workin': Dems Keep Folding to Authoritarians

The Muckrake Political Podcast

Play Episode Listen Later Sep 30, 2025 47:53


Co-hosts Jared Yates Sexton and Nick Hauselman unpack a jaw-dropping Trump admission—governing off TV images and staff spin—and how that fantasy politics becomes policy: talk of sending “full force” into blue cities, a rebranding of domestic terror to target dissent, and an FBI focus that ignores the deadliest threats. They contrast right-wing violence with the media's left-is-rising narrative, dig into how grifters launder fear into power, and roast Democratic leadership for meeting authoritarian tactics with mushy pressers as a shutdown looms. Plus: Portland reality vs. propaganda, why “security” is the fig leaf for crackdowns, and what real pushback should look like. Support the show by signing up to our Patreon and get access to the full Weekender episode each Friday as well as special Live Shows and access to our community discord: http://patreon.com/muckrakepodcast Learn more about your ad choices. Visit megaphone.fm/adchoices

Yahoo Sports NFL Podcast
Folding or doubling down on preseason takes: take poker! Trevor Lawrence, Ravens D, Panthers to playoffs? | Football 301

Yahoo Sports NFL Podcast

Play Episode Listen Later Sep 23, 2025 71:07


Nate Tice & Matt Harmon look back at some of their hottest preseason takes and decide whether to fold or double down after 3 weeks of NFL action. The duo start by playing take poker with the Baltimore Ravens and their leaky defense after falling to 1-2 on Monday night. Matt eats crow on the Buffalo Bills having a top-ten defense, while Nate boldly stands pat on Trevor Lawrence being a top-nine quarterback.

Later, Nate & Matt hit on the Las Vegas Raiders, Denver Broncos & Nate's wild Indianapolis Colts take before wrapping up with the Carolina Panthers, Tetairoa McMillan for OROY and the Seattle Seahawks winning the NFC West.

(2:45) - MNF recap: Are Ravens still Super Bowl contenders?
(14:15) - Can the Bills still have a top-ten defense?
(20:55) - Is Trevor Lawrence a top-nine QB?
(32:10) - Raiders under 7.5 wins?
(41:15) - Can Broncos still have NFL's top defense?
(45:20) - Nate's preseason Daniel Jones take
(56:50) - Can Panthers win the NFC South? + Tetairoa McMillan for OROY
(1:03:30) - Seahawks winning the NFC West?

Subscribe to Football 301 on your favorite podcast app:

Trivia With Budds
11 Trivia Questions on All About Animals

Trivia With Budds

Play Episode Listen Later Sep 15, 2025 6:50


See what you know about animals!

LOVE TRIVIA WITH BUDDS? CHECK OUT THE MNEMONIC MEMORY PODCAST! “Grow your brain one leaf at a time—tune in to The Mnemonic Memory Podcast.” http://www.themnemonictreepodcast.com/

Fact of the Day: In 2009, Ken Basin became the first contestant on the U.S. version of Who Wants to Be a Millionaire to miss the million-dollar question. He debated what he would regret more: walking away with $500K and being right or answering it and being wrong. He risked it, lost $475K, and left with $25K.

Triple Connections: Egg, Folding, Arm

THE FIRST TRIVIA QUESTION STARTS AT 01:31

SUPPORT THE SHOW MONTHLY, LISTEN AD-FREE FOR JUST $1 A MONTH: www.Patreon.com/TriviaWithBudds

INSTANT DOWNLOAD DIGITAL TRIVIA GAMES ON ETSY, GRAB ONE NOW!

GET A CUSTOM EPISODE FOR YOUR LOVED ONES: Email ryanbudds@gmail.com

Theme song by www.soundcloud.com/Frawsty

Bed Music: "EDM Detection Mode" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/

http://TriviaWithBudds.com http://Facebook.com/TriviaWithBudds http://Instagram.com/ryanbudds

Book a party, corporate event, or fundraiser anytime by emailing ryanbudds@gmail.com or use the contact form here: https://www.triviawithbudds.com/contact

SPECIAL THANKS TO ALL MY AMAZING PATREON SUBSCRIBERS INCLUDING: Mollie Dominic Vernon Heagy Brian Clough Nathalie Avelar Becky and Joe Heiman Natasha raina Waqas Ali leslie gerhardt Skilletbrew Bringeka Brooks Martin Yves Bouyssounouse Sam Diane White Youngblood Evan Lemons Trophy Husband Trivia Rye Josloff Lynnette Keel Nathan Stenstrom Lillian Campbell Jerry Loven Ansley Bennett Gee Jamie Greig Jeremy Yoder Adam Jacoby rondell Adam Suzan Chelsea Walker Tiffany Poplin Bill Bavar Sarah Dan Katelyn Turner Keiva Brannigan Keith Martin Sue First Steve Hoeker Jessica Allen Michael Anthony White Lauren Glassman Brian Williams Henry Wagner Brett Livaudais Linda Elswick Carter A. Fourqurean KC Khoury Tonya Charles Justly Maya Brandon Lavin Kathy McHale Chuck Nealen Courtney French Nikki Long Mark Zarate Laura Palmer JT Dean Bratton Kristy Erin Burgess Chris Arneson Trenton Sullivan Jen and Nic Michele Lindemann Ben Stitzel Michael Redman Timothy Heavner Jeff Foust Richard Lefdal Myles Bagby Jenna Leatherman Albert Thomas Kimberly Brown Tracy Oldaker Sara Zimmerman Madeleine Garvey Jenni Yetter JohnB Patrick Leahy Dillon Enderby James Brown Christy Shipley Alexander Calder Ricky Carney Paul McLaughlin Casey OConnor Willy Powell Robert Casey Rich Hyjack Matthew Frost Brian Salyer Greg Bristow Megan Donnelly Jim Fields Mo Martinez Luke Mckay Simon Time Feana Nevel

Armstrong & Getty One More Thing
It's National Folding Your Laundry Day! 

Armstrong & Getty One More Thing

Play Episode Listen Later Sep 3, 2025 12:50 Transcription Available


Jack brings us a list of national holidays, which seemed like the start of a really good time... See omnystudio.com/listener for privacy information.

Under The Hood show
My Jeep Is Folding My Kids Up Like a Taco

Under The Hood show

Play Episode Listen Later Sep 3, 2025 48:14


23 Wagoneer Bad Batteries
17 Ram Diesel Scored Cylinders
20 Jeep Gladiator 6.4 Oil Type
92 LeSabre power steering leak
07 Accord tire balance
67 Galaxy 500 vapor lock
09 Silverado 2500 HD No crank
92 Chevy truck rear shaft leak
13 CR-V brake fluid type

Long Range Pursuit
EP 196: Endex By Gunwerks

Long Range Pursuit

Play Episode Listen Later Aug 20, 2025 57:40


Gunwerks just dropped a game-changer: the Endex rifle system.