Podcasts about models

  • 10,005 podcasts
  • 21,344 episodes
  • 40m average episode duration
  • 3 new episodes daily
  • Latest episode: Feb 14, 2026



    Latest podcast episodes about models

    The John Batchelor Show
    S8 Ep456: Bob Zimmerman of Behind the Black contrasts SpaceX's routine success with ULA's technical struggles, attributing the booming private space sector and massive investments to a shift toward capitalist models.

    Feb 14, 2026 · 11:54

    BackTable Podcast
    Ep. 616 Exploring Unique Outpatient Models in Interventional Radiology with Dr. Richard Daniels

    Feb 13, 2026 · 49:06


    How can patients receive more consistent interventional radiology care amid a national shortage of IR physicians? That question led Dr. Rick Daniels to develop a new outpatient practice model centered on recruiting independent IRs to provide long-term, fractional coverage for groups in need. In this episode of the BackTable Podcast, hosted by Dr. Aaron Fritts, Dr. Daniels outlines the thinking behind this approach and how it aims to expand access to IR services in outpatient settings.

    SYNOPSIS

    The conversation examines the evolving landscape of IR practice, including the challenges associated with transitioning between practice settings and building sustainable outpatient service lines. Dr. Daniels walks through the development of his model, with particular attention to identifying and supporting outpatient embolization opportunities. The discussion also explores the consortium-style structure for independent IRs, emphasizing long-term alignment, professional autonomy, and scalability at a national level. Operational considerations such as technology partnerships, documentation workflows, and targeted marketing strategies offer a practical look at what it takes to make this model work.

    TIMESTAMPS

    00:00 - Introduction
    03:49 - Evolution of an Independent IR Practice
    05:30 - Challenges and Opportunities in Outpatient IR
    09:58 - Building Service Lines and Marketing Strategies
    18:34 - Forming a National IR Group
    25:21 - Balancing Business and Healthcare
    25:37 - Evaluating and Correcting Site Performance
    28:16 - Expanding Geographical Reach
    30:45 - Recruitment and Retention Challenges
    38:07 - The Importance of Tech-Doc Teams
    42:35 - Future Goals and Recruitment Efforts
    45:58 - Conclusion

    The Argument
    Anthropic's Chief on A.I.: 'We Don't Know if the Models Are Conscious'

    Feb 12, 2026 · 62:22


    A.I. is evolving fast, and humanity is falling behind. Dario Amodei, the chief executive of Anthropic, has warned about the potential benefits — and real dangers — linked to the speed of that progress. As one of the lords of this technology, is he on the side of the human race?

    01:37 - The promise and optimism of A.I.
    12:59 - White collar "bloodbaths"
    25:09 - Robotics and physical labor
    30:16 - The first "dangerous" scenario
    42:22 - What if it goes rogue?
    48:01 - Claude's constitution

    (A full transcript of this episode is available on the Times website.)

    The Exclusive With Sharon Tharp
    234: Survivor 50: Rizo Velovic Models His Game After Dan Gheesling

    Feb 12, 2026 · 18:42


    In this preseason interview filmed in Fiji, Rizo Velovic arrives for Survivor 50 as a "super duper fan" ready to cement his legacy among the greats. Fresh off a 10-day turnaround from Season 49, Rizo reveals that he models his strategic approach after Big Brother legend Dan Gheesling. He also breaks down his "RizGod" persona, his "surreal" first impressions of seeing Coach at Ponderosa, and why he believes his status as an unknown commodity gives him a "mortgage-level" advantage over the legends. 

    Geekshow Podcast
    Geekshow Helpdesk: Medical Science is in Danger

    Feb 12, 2026 · 60:45


    Tony:
    - Carbonation Station: Lando Reviews Lando!
    - More Autonomous Lies: https://www.techspot.com/news/111233-waymo-admits-autopilot-often-guys-philippines.html
    - Trump Phone situation continues to amaze: https://www.engadget.com/mobile/smartphones/trump-mobiles-t1-phone-is-apparently-still-coming-but-itll-be-uglier-and-more-expensive-190626835.html
    - Another possibly awesome BT option: https://www.audioreviews.org/noble-announces-sceptre/
    - The FDA continues to circle the drain: https://arstechnica.com/health/2026/02/fda-refuses-to-review-modernas-mrna-flu-vaccine/
    - Snapmaker U1 has arrived!

    Jarron:
    - Tesla is canceling Model S and Model X: https://www.cnn.com/2026/01/28/business/tesla-q4-2025-earnings
    - Sodium-ion batteries incoming: The World's First Sodium-Ion Battery in Commercial EVs - Great at Low Temperatures
    - Biohacking to get ahead at the Olympics: https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulking-your-budge/
    - Rivian R2 shown off: https://www.theverge.com/transportation/876441/the-early-reviews-of-the-rivian-r2-are-starting-to-roll-in

    Owen:
    - Discord cracking down: https://www.theverge.com/tech/875309/discord-age-verification-global-roll-out
    - Discord clarification: https://discord.com/press-releases/discord-launches-teen-by-default-settings-globally

    Lando:
    - AI Ads: https://techcrunch.com/2026/02/09/chatgpt-rolls-out-ads/
    - Fart Tech: https://gizmodo.com/how-many-times-do-you-fart-a-day-smart-underwear-says-its-way-more-than-you-think-2000719805

    Learning Bayesian Statistics
    151 Diffusion Models in Python, a Live Demo with Jonas Arruda

    Feb 12, 2026 · 95:43


    • Support & get perks!
    • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com
    • Intro to Bayes and Advanced Regression courses (first 2 lessons free)

    Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

    Chapters:

    00:00 Exploring Generative AI and Scientific Modeling
    10:27 Understanding Simulation-Based Inference (SBI) and Its Applications
    15:59 Diffusion Models in Simulation-Based Inference
    19:22 Live Coding Session: Implementing BayesFlow for SBI
    34:39 Analyzing Results and Diagnostics in Simulation-Based Inference
    46:18 Hierarchical Models and Amortized Bayesian Inference
    48:14 Understanding Simulation-Based Inference (SBI) and Its Importance
    49:14 Diving into Diffusion Models: Basics and Mechanisms
    50:38 Forward and Backward Processes in Diffusion Models
    53:03 Learning the Score: Training Diffusion Models
    54:57 Inference with Diffusion Models: The Reverse Process
    57:36 Exploring Variants: Flow Matching and Consistency Models
    01:01:43 Benchmarking Different Models for Simulation-Based Inference
    01:06:41 Hierarchical Models and Their Applications in Inference
    01:14:25 Intervening in the Inference Process: Adding Constraints
    01:25:35 Summary of Key Concepts and Future Directions

    Thank you to my Patrons for making this episode possible!

    Links from the show:
    - Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026!
    - Jonas's Diffusion for SBI Tutorial & Review (Paper & Code)
    - The BayesFlow Library
    - Jonas on LinkedIn
    - Jonas on GitHub
    - Further reading for more mathematical details: Holderrieth & Erives
    - 150 Fast Bayesian Deep Learning, with David Rügamer, Emanuel Sommer & Jakob Robnik
    - 107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt

    Boys Club
    Ep. 223 - The Messy Olympics; Gigi Claudid, our AI agent; Tatum Hunter on internet culture; Mashal Waqar and Artem Brazhnikov from Octant on sustainable funding models; Nick Devor of Barron's on Kalshi, Polymarket and the Super Bowl

    Feb 12, 2026 · 91:19


    00:00 Introduction to Boys Club Live
    00:44 The viral Vogue clip
    03:46 Market Talk
    07:13 Shoutout to Octant
    11:29 AI Etiquette and Social Contracts
    15:19 Gigi Claudid: Training our AI agent
    20:49 Norwegian Athlete's Emotional Confession
    23:34 Unpacking Relationship Drama
    24:44 Messy Olympics: Scandals in Sports
    25:32 Partner Shoutout: Anchorage Digital
    27:27 Podcast Recommendation: The Rest is History
    29:40 Interview with Tatum Hunter: Internet Culture Insights
    30:06 Deepfakes and AI Ethics
    38:43 Personal Surveillance and Trust Issues
    48:52 TikTok's Mental Health Rabbit Hole
    52:16 Shill Minute: Best Cookie in Crown Heights
    53:08 Introduction to Octant: Innovating Funding Models
    54:52 Funding Ethereum: Grants and Sustainability
    56:50 Octant V2: Revolutionizing Community Funding
    58:43 Sustainable Growth and the Future of Ethereum
    01:05:56 The Intersection of Venture Capital and Sustainable Funding
    01:11:25 Guest Nick Devor of Barron's on Prediction Markets
    01:12:50 Gambling and Insider Trading in Prediction Markets
    01:23:01 CFTC Challenges and the Future of Regulation
    01:26:11 Free Groceries: A Marketing Strategy
    01:29:50 Conclusion and Final Thoughts

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod: on YouTube!

Timestamps

* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The “Bitter Lesson” and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the “Virtual Cell”
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints

* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.

The Shift to Generative Architectures

* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the “bitter lesson” of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design

* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization

* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that. Like, in some sense, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids.
And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes. So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved from the perspective of structure prediction; when it isn't, it's much more challenging. And I think it's also worth differentiating, because sometimes we confound the two a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.

Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like... you know, proteins are not static. They move, they take different shapes based on their energy states. And I think we are also not that good at understanding the different states that the protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints we were able to, you know, to make such dramatic progress.

Brandon [00:09:45]: So I want to ask, why do the intermediate states matter? But first, I kind of want to understand, why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how diseases work, we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function, and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction.
And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like the difference between having kind of a list of parts that you would put in a car and seeing kind of the car in its final form; you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein folds, or, you know, how the car is made: to some extent it's that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding in some diseases and so on. If we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line in, I think it's in the AlphaFold 2 manuscript, where they sort of discuss also, like, why we're even hopeful that we can target the problem in the first place. And there's this notion that, like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that, like, yeah, we might be able to predict this very, like, constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of, like, really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as kind of like a classical example of, like, an NP problem. Like, there are so many different, you know, types of, you know, shapes that, you know, these amino acids could take, and this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, kind of seeing machine learning crack it. Clearly there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans are probably not really able to, uh, to understand, but that the models have, have learned.

Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago, and he said that he was following the development of this, and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations, and that you see this again and again in different species. So explain: why does that give us a good hint that they're close by to each other?

RJ [00:13:41]: Yeah. Um, like, think of it this way: that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it, right, in three dimensions. And so it's almost like the protein, through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other.
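To make the co-evolution signal concrete, here is a toy sketch (ours, not from the episode): raw mutual information between columns of a made-up alignment. Real contact predictors correct for phylogenetic bias and use stronger estimators such as direct coupling analysis, but the intuition is the same: columns that mutate together are candidate 3D contacts.

```python
from collections import Counter
from math import log

# Toy MSA: position 1 (K/R) and position 3 (L/I) mutate together;
# the other columns are conserved. All data here is invented.
msa = [
    "MKALD",
    "MKALD",
    "MRAID",
    "MRAID",
]

def column(j):
    return [seq[j] for seq in msa]

def mutual_information(i, j):
    """Mutual information between alignment columns i and j (in nats)."""
    n = len(msa)
    ci, cj = Counter(column(i)), Counter(column(j))
    cij = Counter(zip(column(i), column(j)))
    return sum(
        (c / n) * log(c * n / (ci[a] * cj[b]))
        for (a, b), c in cij.items()
    )

L = len(msa[0])
pairs = sorted(
    ((i, j) for i in range(L) for j in range(i + 2, L)),  # skip trivial neighbors
    key=lambda p: mutual_information(*p),
    reverse=True,
)
print(pairs[0])  # -> (1, 3): the co-evolving pair, a candidate 3D contact
```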
Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what is the end state. And therefore you can make a lot of inferences about what the actual total shape is.

RJ [00:14:30]: Yeah, that's right. It's almost like, you know, you have this big, like, three-dimensional valley, you know, where you're sort of trying to find, like, these, like, low energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already, like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of, like, how much physics are these models learning, you know, versus, like, just pure, like, statistics. And, like, I think one of the things, at least, that I believe is that once you're in that sort of approximate area of the solution space, then the models have, like, some understanding, you know, of how to get you to, like, you know, the lower energy, uh, low energy state. And so maybe you have some, some light understanding of physics, but maybe not quite enough, you know, to know how to, like, navigate the whole space. Right. Okay.

Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley, and then it finds the, the minimum or something. Yeah.

Gabriel [00:15:31]: One interesting explanation of how AlphaFold works, that I think is quite insightful, though of course it doesn't cover kind of the entirety of what AlphaFold does, is one I'm going to borrow from Sergio Chinico from MIT. The interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment? Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is running some kind of, you know, Dijkstra algorithm, where it's sort of decoding: okay, these have to be close. Okay, then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.

Brandon [00:16:42]: Interesting. So there's kind of two different things going on, the kind of coarse-grained and then the fine-grained optimizations. Interesting. Yeah. Very cool.
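The "Dijkstra-like" decoding Gabriel sketches can be caricatured in a few lines: seed a distance matrix with chain connectivity and a couple of co-evolution contacts, then propagate them with the triangle inequality (Floyd-Warshall here, for brevity). The network learns something far richer than this, and the numbers below are invented; this is only meant to make the coarse-grained step concrete.

```python
import numpy as np

n = 6                                   # residues in a toy chain
INF = 1e9
d = np.full((n, n), INF)
np.fill_diagonal(d, 0.0)

def add_bound(i, j, dist):
    d[i, j] = d[j, i] = min(d[i, j], dist)

for i in range(n - 1):
    add_bound(i, i + 1, 3.8)            # chain neighbors: ~3.8 Å CA-CA
add_bound(0, 4, 8.0)                    # co-evolution says residues 0 and 4 touch
add_bound(1, 5, 8.0)                    # ...and residues 1 and 5 touch

for k in range(n):                      # Floyd-Warshall: propagate the bounds
    d = np.minimum(d, d[:, [k]] + d[[k], :])

print(np.round(d, 1))                   # dense matrix of rough upper bounds
```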
Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe it's a good time to move on to that. So yeah, AlphaFold2 came out and it was, like, I think, fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out. And maybe for some more history, like, what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Boltz. But anyway.

Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field, and with many others, you know, the clear problem that, you know, was, you know, obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules. And so, why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function; you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of multiple chains. And then these multiple chains interact with other molecules to give the function to those. And on the other hand, you know, when we try to intervene on these interactions, think about, like, a disease, think about, like, a biosensor or many other ways, we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein, or target. You know, this problem, after AlphaFold2, you know, became clear, kind of one of the biggest problems in the field to solve. Many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaFold3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do, while, you know, some of the rest of the field really tried to model different interactions separately, you know, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA have their structure, is they put everything together and, you know, trained very large models, with a lot of advances including kind of changing some of the key architectural choices, and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein-small molecule, which is critical to developing kind of new drugs, protein-protein, understanding, you know, interactions of, you know, proteins with RNA and DNA, and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?

Gabriel [00:19:48]: Yeah, so one critical one, that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours, in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem, where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these structures can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution.
But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about: I'm undecided between different answers. What's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, the very specialized equivariant architecture that it was in AlphaFold2.

Brandon [00:21:41]: So this is a bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from, you know, being, like, a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have kind of architectures that are very specialized. And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of the consensus is that, you know, the performance that we get from the specialized architectures is vastly superior to what we get through a simple transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive coming from some of the other kind of fields and applications, is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.
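The averaging failure mode Gabriel describes is easy to see in one dimension. A toy illustration (not from the episode): when the ground truth is bimodal, the MSE-optimal regressor predicts the mean of the modes, a value that never actually occurs, while sampling respects the modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# One input, two equally valid "conformations" near -1 and +1 (1-D stand-in).
truth = rng.choice([-1.0, 1.0], size=10_000) + rng.normal(0.0, 0.05, size=10_000)

# The MSE-optimal regression answer is the mean of the modes (~0.0):
# a "structure" that is never observed in the data.
print("regression predicts:", round(truth.mean(), 3))

# A generative model instead samples the posterior, landing on real modes.
print("generative samples:", np.round(rng.choice(truth, size=5), 2))
```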
RJ [00:29:14]: ...in a place, I think, where we had, you know, some experience working, you know, with the data and working with this type of models. And I think that put us already in, like, a good place to, you know, to produce it quickly. And, you know, I would even say, like, I think we could have done it quicker. The problem was, like, for a while we didn't really have the compute, and so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so, like, while the model was training, we were, like, finding bugs left and right. A lot of them that I wrote. And, like, I remember, like, I was, like, sort of, like, you know, doing, like, surgery in the middle, like, stopping the run, making the fix, like, relaunching. And yeah, we never actually went back to the start. We just, like, kept training it with, like, the bug fixes along the way, which would be impossible to reproduce now. Yeah, yeah, no, that model, like, has gone through such a curriculum that, you know, it learned some weird stuff. But yeah, somehow, by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that the way that we were training most of that model was through a cluster from the Department of Energy. But that's sort of like a shared cluster that many groups use. And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so, kind of towards the end, with Evan, the CEO of Genesis: basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we, we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then, and then there's some progression from there.

Gabriel [00:31:06]: Yeah, so I would say kind of that Boltz-1, but also kind of these other kind of sets of models that came around the same time, were a big leap from, you know, kind of the previous kind of open source models, and, you know, kind of really approaching the level of AlphaFold 3. But I would still say that, you know, even to this day, there are, you know, some specific instances where AlphaFold 3 works better. I think one common example is antibody-antigen prediction, where, you know, AlphaFold 3 still seems to have an edge in many situations. Obviously, these are somewhat different models; you know, you run them, you obtain different results. So it's, it's not always the case that one model is better than the other, but kind of in aggregate, we still saw, especially at the time, AlphaFold 3, you know, still having a bit of an edge.

Brandon [00:32:00]: We should talk about this more when we talk about BoltzGen, but, like, how do you know one model is better than the other? Like, you, so you, I make a prediction, you make a prediction, like, how do you know?

Gabriel [00:32:11]: Yeah. So, the great thing about kind of structure prediction, and, you know, once we get into the design space of designing new small molecules, new proteins, this becomes a lot more complex, but a great thing about structure prediction is that, a bit like, you know, CASP was doing, basically the way that you can evaluate models is that, you know, you train a model on the structures that were, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then we basically look for recent structures: okay, which structures look pretty different from anything that was published before? Because we really want to try to understand generalization.

Brandon [00:33:13]: And then on these new structures, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, when you're, you intentionally train to the same date or something like that. Exactly. Right. Yeah.

Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily kind of compare these models. Obviously, that assumes that, you know, the training...
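A minimal sketch of that evaluation protocol, with placeholder entries, cutoff date, and similarity scorer (real pipelines run sequence and structural similarity searches against the full training set):

```python
from datetime import date

# Hypothetical PDB index; ids, dates, and the scorer are all placeholders.
entries = [
    {"pdb_id": "1ABC", "released": date(2019, 5, 1)},
    {"pdb_id": "8XYZ", "released": date(2024, 3, 9)},
]
CUTOFF = date(2021, 9, 30)              # training-set release cutoff

train = [e for e in entries if e["released"] <= CUTOFF]

def similarity_to_train(entry):
    """Placeholder: e.g. max sequence identity against the training set."""
    return 0.2

# Test only on post-cutoff structures that look unlike anything in training,
# so the benchmark measures generalization rather than memorization.
test = [e for e in entries
        if e["released"] > CUTOFF and similarity_to_train(e) < 0.3]

print([e["pdb_id"] for e in train], [e["pdb_id"] for e in test])
```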
You've always been very passionate about validation. I remember, like, DiffDock, and then there was, like, DiffDock-L and DockGen. You've thought very carefully about this in the past. Like, actually, I think DockGen is, like, a really funny story that, I think, I don't know if you want to talk about that. It's an interesting, like...

Gabriel: Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people really liking it. But honestly, most of the time, you know, to be honest, maybe the most useful feedback is, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical, and this is also something, you know, across other fields of machine learning: it's always critical, to make progress in machine learning, to set clear benchmarks. And as, you know, you start making progress on certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates. And so, you know, the example of DockGen was: you know, we published this initial model called DiffDock in my first year of PhD, which was sort of, like, you know, one of the early models to try to predict kind of interactions between proteins and small molecules, that we put out a year after AlphaFold2 was published. And now, on the one hand, you know, on the benchmarks that we were using at the time, DiffDock was doing really well, kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to many biologists, and one example was the group of Nick Polizzi at Harvard that we collaborated with, we noticed, started noticing that there was this clear pattern where, for proteins that were very different from the ones that we trained on, the model was, was struggling. And so, you know, that seemed clear that, you know, this is probably kind of where we should, you know, put our focus. And so we first developed, you know, with Nick and his group, a new benchmark, and then, you know, went after it and said, okay, what can we change about the current architecture to improve this pattern and generalization? And this is the same thing that, you know, we're still doing today, you know: kind of, where does the model not work? And then, you know, once we have that benchmark, you know, let's try against it any ideas that we have about the problem.

RJ [00:36:15]: And there's a lot of, like, healthy skepticism in the field, which I think, you know, is, is, is great. And I think, you know, it's very clear that there's a ton of things the models don't really work well on. But I think one thing that's probably, you know, undeniable is just, like, the pace of, pace of progress, you know, and how, how much better we're getting, you know, every year. And so I think if you, you know, if you assume, you know, any constant, you know, rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?

RJ [00:36:45]: Like, yeah, yeah, yeah, it's one of those things. Like, you've been doing this... being in the field, you don't see it coming, you know? And, like, I think, yeah, hopefully we'll, you know, we'll, we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious: you get this great feedback from the, from the community, right?
By being open source. My question is partly, like, okay, yeah, if you open source, then everyone can copy what you did. But it's also maybe balancing priorities, right? Where, like, all my customers are saying, I want this, there's all these problems with the model. Yeah, yeah. But my customers don't care, right? So, like, how do you, how do you think about that? Yeah.

Gabriel [00:37:26]: So I would say a couple of things. One is, you know, part of our goal with Boltz, and, you know, this is also kind of established as kind of the mission of the public benefit company that we started, is to democratize the access to these tools. But one of the reasons why we realized that Boltz needed to be a company, it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get, you know, chemists and biologists, you know, across, you know, both academia, biotech and pharma to use your model in their therapeutic programs. And so a lot of what we think about, you know, at Boltz, beyond kind of just the models, is thinking about all the layers that come on top of the models to get, you know, from, you know, those models to something that can really enable scientists in the industry. And so that goes, you know, into building kind of the right kind of workflows, that take in kind of, for example, the data and try to answer kind of directly those problems that, you know, the chemists and the biologists are asking, and then also kind of building the infrastructure. And so, this is to say that, you know, even with models fully open, you know, we see a ton of potential for, you know, products in the space. And the critical part about a product is that even, you know, for example, with an open source model, you know, running the model is not free. You know, as we were saying, these are pretty expensive models. And especially, and maybe we'll get into this, you know, these days we're seeing kind of pretty dramatic inference-time scaling of these models, where, you know, the more you run them, the better the results are. But there, you know, you start getting to a point where compute and compute costs become a critical factor. And so putting a lot of work into building the right kind of infrastructure, building the optimizations and so on really allows us to provide, you know, a much better service than potentially the open source models. That is to say, you know, even though, you know, with a product we can provide a much better service, I do still think, and we will continue, to put a lot of our models open source, because of the critical kind of role, I think, of open source models: it's, you know, helping kind of the community progress on the research, and, you know, from which we, we all benefit. And so, you know, we'll continue to, on the one hand, you know, put some of our kind of base models open source, so that the field can, can build on top of them. And, you know, as we discussed earlier, we learn a ton from, you know, the way that the field uses and builds on top of our models. But then, you know, try to build a product that gives the best experience possible to scientists, so that, you know, like, a chemist or a biologist doesn't need to, you know, spin up a GPU and, you know, set up, you know, our open source model in a particular way, but can just, you know... a bit like, you know, even though I am a computer scientist, machine learning scientist, I don't necessarily, you know, take an open source LLM and try to kind of spin it up.
But, you know, I just maybe open the GPT app or Claude Code and just use it as an amazing product. We kind of want to give the same experience.

Brandon [00:40:40]: I heard a good analogy yesterday, that a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe, like, the number of people, even, like, in my short time, you know, between AlphaFold3 coming out and the end of the PhD, like, the number of people that would, like, reach out just for, like, us to, like, run AlphaFold3 for them, you know, or things like that. Just because, like, you know, Boltz in our case, you know, just because it's, like, it's, like, not that easy, you know, to do that, you know, if you're not a computational person. And I think, like, part of the goal here is also that, you know, we continue to obviously build the interface with computational folks, but that, you know, the models are also accessible to, like, a larger, broader audience. And then that comes from, like, you know, good interfaces and stuff like that.

Gabriel [00:41:27]: I think one, like, really interesting thing about Boltz is that with the release of it, you didn't just release a model, but you created a community. Yeah. That community, it grew very quickly. Did that surprise you? And, like, what has been the evolution of that community, and how has that fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's, like, very much, like, when we release a new model, it's, like, there's a big, big jump. But yeah, it's, I mean, it's been great. You know, we have a Slack community that has, like, thousands of people on it. And it's actually, like, self-sustaining now, which is, like, the really nice part. Because, you know, it's, it's almost overwhelming, I think, you know, to be able to, like, answer everyone's questions and help. It's really difficult, you know, the few people that we were. But it ended up that, like, you know, people would answer each other's questions and, like, sort of, like, you know, help one another. And so the Slack, you know, has been, like, kind of, yeah, self, self-sustaining, and that's been, it's been really cool to see.

RJ [00:42:21]: And, you know, that's, that's for, like, the Slack part. But then also, obviously, on GitHub as well, we've had, like, a nice, nice community. You know, I think we also aspire to be even more active on it, you know, than we've been in the past six months, which has been, like, a bit challenging, you know, for us. But yeah, the community has been, has been really great, and, you know, there's a lot of papers also that have come out with, like, new evolutions on top of Boltz. And it's surprised us to some degree, because, like, there's a lot of models out there, and I think, like, you know, sort of people converging on that was, was really cool. And, you know, I think it speaks also, I think, to the importance of, like, you know, when, when you put code out, like, to try to put a lot of emphasis on, like, making it, like, as easy to use as possible, in something we thought a lot about when we released the code base. You know, it's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that that was one of the factors that caused your community to grow, just the focus on easy to use, make it accessible? I think so.

RJ [00:43:14]: Yeah. And we've, we've heard it from a few people over the, over the, over the years now. And, you know, and some people still think it should be a lot nicer, and they're, and they're right.
But yeah, I think it was, you know, at the time, maybe a little bit easier than, than other things.

Gabriel [00:43:29]: The other part, I think, that led to the community, and to some extent, I think, you know, like, somewhat the trust in kind of what we, what we put out, is the fact that, you know, it's not really been kind of, you know, one model. And maybe we'll talk about it: you know, after Boltz 1, you know, there were maybe another couple of models kind of released, you know, or open sourced kind of soon after. We kind of continued kind of that open source journey, at least with Boltz 2, where we are not only improving kind of structure prediction, but also starting to do affinity prediction, understanding kind of the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs. And then, you know, more recently, also kind of a protein design model. And so we've sort of been building this suite of, of models that come together, interact with one another, where, you know, kind of, there is almost an expectation, that, you know, we, we take very much to heart, of, you know, always having, kind of, you know, across kind of the entire suite of different tasks, the best model out there, so that, sort of, our open source tools can be kind of the go-to models for everybody in the, in the industry.

I really want to talk about Boltz 2, but before that, one last question in this direction: was there anything about the community which surprised you? Was there anyone doing something where you're like, why would you do that? That's crazy. Or: that's actually genius, and I never would have thought about that.

RJ [00:45:01]: I mean, we've had many contributions. I think, like, some of the interesting ones, like... I mean, we had, you know, this one individual who, like, wrote, like, a complex GPU kernel, you know, for part of the architecture. And the funny thing is, like, that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this, you know, for this person to, you know, to decide to do it, but that was, like, a really great contribution. We've had a bunch of others, like, you know, people figuring out, like, ways to, you know, hack the model to do something, like cyclic peptides. Like, you know, there's... I don't know if there are any other interesting ones that come to mind.

Gabriel [00:45:41]: One cool one, and this was, you know, something that initially was proposed as, you know, as a message in the Slack channel by Tim O'Donnell, was basically... there are some cases, especially, for example, we discussed, you know, antibody-antigen interactions, where the models don't necessarily kind of get the right answer. What he noticed is that, you know, the models were somewhat stuck in predicting kind of the antibodies. And so he basically ran this experiment: in this model, you can condition, basically, you can give hints. And so he basically gave, you know, random hints to the model, basically: okay, you should bind to this residue. You should bind to the first residue, or you should bind to the 11th residue, or you should bind to the 21st residue; you know, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on.
So it's sort of like doing a scan, and then, you know, conditioning the model to predict all of them, and then looking at the confidence of the model in each of those cases and taking the top. And so it's sort of like a very, somewhat crude way of doing kind of inference-time search. But surprisingly, you know, for antibody-antigen prediction, it actually kind of helped quite a bit. And so there are some, you know, interesting ideas where, you know, obviously, as kind of the developers of the model, you say kind of, you know, wow, why would the model, you know, be so dumb? But, you know, it's very interesting. And that, you know, leads you to also kind of, you know, start thinking about: okay, how do I, can I do this, you know, not with this brute force, but, you know, in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to, like, the, you know, the power of scoring. We're seeing that a lot. I'm sure we'll talk about it more when we talk about BoltzGen. But, you know, our ability to, like, take a structure and determine that that structure is, like, good, you know, like, somewhat accurate, whether that's a single chain or, like, an interaction, is a really powerful way of improving, you know, the models. Like, sort of, like, you know, if you can sample a ton, and you assume that, like, you know, if you sample enough, you're likely to have, like, you know, the good structure, then it really just becomes a ranking problem. And, you know, part of the inference-time scaling that Gabri was talking about is very much that. It's, like, you know, the more we sample, the more, like, you know, the ranking model ends up finding something it really likes. And so I think our ability to get better at ranking, I think, is also what's going to enable sort of the next, you know, next big, big breakthroughs. Interesting.
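Tim O'Donnell's trick, plus RJ's sample-and-rank point, amounts to a small loop. A hedged pseudocode sketch: `predict_complex`, its `contact_hint` argument, and the `confidence` field are stand-ins, not the real Boltz API.

```python
def hint_scan(antibody_seq, antigen_seq, predict_complex, stride=10):
    """Condition on a contact hint at every `stride`-th antigen residue
    (0, 10, 20, ... = the 1st, 11th, 21st residues), then keep the pose
    the model itself scores highest."""
    best = None
    for residue_index in range(0, len(antigen_seq), stride):
        # hypothetical call: ask for a complex where the antibody
        # touches this particular antigen residue
        pose = predict_complex(antibody_seq, antigen_seq,
                               contact_hint=residue_index)
        if best is None or pose.confidence > best.confidence:
            best = pose
    return best  # crude inference-time search: scan wide, rank, take the top
```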
Brandon [00:48:17]: But I guess, there's a... my understanding is, there's a diffusion model, and you generate some stuff, and then you, I guess, it's just what you said, right? Then you rank it using a score, and then you finally... And so, like, can you talk about those different parts? Yeah.

Gabriel [00:48:34]: So, first of all, like... one of the critical kind of, you know, beliefs that we had, you know, also when we started working on Boltz 1, was sort of like: the structure prediction models are somewhat, you know, our field's version of foundation models, you know, learning about kind of how proteins and other molecules interact. And then we can leverage that learning to do all sorts of other things. And so with Boltz 2, we leveraged that learning to do affinity predictions, so understanding kind of, you know: if I give you this protein, this molecule, how tight is that interaction? For BoltzGen, what we did was taking kind of that kind of foundation model and then fine-tuning it to predict kind of entire new proteins. And so the way basically that that works is sort of like: for the protein that you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens. And you train the model to, you know, predict both the structure of kind of that protein and also what the different amino acids of that protein are. And so basically the way that BoltzGen operates is that you feed a target protein that you may want to kind of bind to, or, you know, another DNA, RNA, and then you feed the high-level kind of design specification of, you know, what you want your new protein to be. For example, it could be, like, an antibody with a particular framework. It could be a peptide. It could be many other things. And that's with natural language, or? And that's, you know, basically, you know, prompting. We have kind of this sort of, like, spec that you specify, and, you know, you feed kind of this spec to the model. And then the model translates this into, you know, a set of, you know, tokens, a set of conditioning to the model, a set of, you know, blank tokens. And then, you know, it basically decodes, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. And, you know, basically, then we take that, and, as Jeremy was saying, we are trying to score it: you know, how good of a binder is it to that original target?

Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. And then that kind of gives you a score? Exactly.

Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you predict the structure with something like Boltz 2, and then you basically compare that structure with what the design model predicted. And this is sort of, like, in the field, called consistency: it's basically, you want to make sure that, you know, the structure that you're predicting is actually what you were trying to design. And that gives you a much better confidence that, you know, that's a good design. And so that's the first filtering. And the second filtering that we did as part of kind of the Boltz 2 pipeline that was released is that we look at the confidence that the model has in the structure. Now, unfortunately, kind of going to your question of, you know, predicting affinity: unfortunately, confidence is not a very good predictor of affinity. And so that's one of the things where we've actually made a ton of progress, you know, since we released Boltz 2.
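The two filters Gabriel just walked through can be summarized in a short sketch. All of the callables here (`refold`, `rmsd`, `predict_affinity`) are placeholders, and the final sort assumes the affinity score is expressed so that lower means tighter binding, as with a Kd.

```python
def select_designs(designs, refold, rmsd, predict_affinity, rmsd_cutoff=2.0):
    """designs: iterable of (sequence, intended_structure) pairs."""
    survivors = []
    for seq, intended in designs:
        refolded = refold(seq)                      # refold with a structure predictor
        if rmsd(refolded, intended) < rmsd_cutoff:  # filter 1: self-consistency
            survivors.append((seq, predict_affinity(seq)))
    # filter 2: rank survivors by predicted affinity,
    # tightest predicted binders first (lower score = tighter, Kd-style)
    return sorted(survivors, key=lambda pair: pair[1])
```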
Brandon [00:52:03]: Okay, just backing up a minute. So your diffusion model predicts not only the protein sequence but also its folding?
Gabriel [00:52:32]: Exactly. One of the big things we did differently from other models in the space, and there were papers that had done this before, but we really scaled it up, was to merge structure prediction and sequence prediction into almost the same task. The way BoltzGen works, the only thing you're doing is predicting structure; the only supervision we give is on the structure. But because the structure is atomic, and different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure but also the identity of the amino acid the model believed was there. So instead of having two supervision signals, one discrete and one continuous, that don't interact well together, we built an encoding of sequences in structures that lets us use exactly the same supervision signal we were already using for Boltz-2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins.
RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work.
Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, the encoding is that you add a bunch of atoms, which can be anything, and they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. It's a unique way of doing this. It was such a cool, fun idea.
RJ [00:54:29]: I think that idea had existed before. There were a couple of papers.
Gabriel [00:54:33]: Yeah, a couple of papers had proposed it, and Hannes really took it to large scale.
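To make the "read the sequence off the atoms" point concrete, here is a toy illustration. The atom table is deliberately tiny and simplified; both it and the function are our own example, not the encoding from the paper.

```python
# Toy: amino acids differ in heavy-atom composition, so the set of atoms a
# model places for a residue also reveals which amino acid it "believes"
# is there. Real residues need the full chemistry; this table is a sample.
RESIDUE_ATOMS = {
    "GLY": frozenset({"N", "CA", "C", "O"}),
    "ALA": frozenset({"N", "CA", "C", "O", "CB"}),
    "SER": frozenset({"N", "CA", "C", "O", "CB", "OG"}),
    "CYS": frozenset({"N", "CA", "C", "O", "CB", "SG"}),
}

def identity_from_atoms(placed: frozenset) -> str:
    """Recover a residue identity from the atoms the model placed."""
    for residue, atoms in RESIDUE_ATOMS.items():
        if atoms == placed:
            return residue
    return "UNK"  # no exact match in this toy table

# A residue placed with a CB and an OG reads back as serine.
assert identity_from_atoms(frozenset({"N", "CA", "C", "O", "CB", "OG"})) == "SER"
```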
Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to validation of the model. In my opinion, everyone we talk to feels that wet-lab validation, or whatever the appropriate real-world validation is, is if not the whole problem then a big, giant part of it. Can you talk about the highlights? Because to me the results are impressive, both from the perspective of the model and from the effort a large team put into the validation.
Gabriel [00:55:18]: First of all, I should say that both when we were at MIT, in Tommi Jaakkola's and Regina Barzilay's labs, and now at Boltz, we are not a bio lab and we are not a therapeutics company. So to some extent we were forced to look outside our own team for the experimental validation. One of the things Hannes pioneered on the team was the idea of going not just to one specific group with one specific system, where you might overfit a bit to that system, but testing the model across a very wide variety of settings. Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get validation that cuts across many different tasks? He put together something like 25 different academic and industry labs that committed to testing some of the model's designs, some of that testing still ongoing, and to giving results back to us, in exchange for hopefully getting some great new sequences for their tasks. He coordinated that very wide set of scientists, and already in the paper I think we shared results from eight to ten different labs: designing peptides targeting ordered proteins, peptides targeting disordered proteins, proteins that bind small molecules, and nanobodies, across a wide variety of targets. That gave the paper a lot of validation, and validation that was wide.
Brandon [00:57:39]: And would those be therapeutics relevant to humans as well?
Gabriel [00:57:45]: They're relevant to humans as well. Obviously, you need to do some work in, quote unquote, humanizing them, making sure they have the right characteristics so they're not toxic to humans, and so on.
RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller, because they're easier to manufacture. That comes with other challenges, maybe a little less selectivity than something that has more hands, but there's a big desire to design mini proteins, nanobodies, and small peptides, which are just great drug modalities.
Brandon [00:58:27]: Okay. I think we left off talking about validation in the lab, and I was very excited to see all the diverse validations you've done. Can you go into more detail about specific ones?
RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands. Then we rank them and pick the top, in this case 15 for each target, and then we measure the success rates: both how many targets we were able to get a binder for, and, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones: we had a cool one where there was a small molecule and we designed a protein that binds to it, which has a lot of interesting applications, for example biosensing, like Gabri mentioned. And we had a disordered protein, as you mentioned. Those were some of the highlights.
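That design-rank-test loop fits in a few lines. The sketch below is a paraphrase of the campaign RJ describes, with hypothetical `generate`, `score`, and `is_binder` callables standing in for the design model, the ranking model, and the wet-lab measurement.

```python
# Campaign loop: many designs per target, rank, keep the top picks, then
# compute hit rates from experimental labels. All callables are assumed.
def run_campaign(targets, generate, score, is_binder,
                 n_designs: int = 10_000, picks: int = 15):
    per_target_hit_rate = []
    targets_with_binder = 0
    for target in targets:
        designs = [generate(target) for _ in range(n_designs)]
        top = sorted(designs, key=score, reverse=True)[:picks]
        hits = sum(is_binder(d, target) for d in top)  # wet-lab result
        per_target_hit_rate.append(hits / picks)
        targets_with_binder += hits > 0
    return targets_with_binder / len(targets), per_target_hit_rate
```

Both numbers RJ mentions fall out directly: the fraction of targets with at least one binder, and the per-target hit rate among the picked designs.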
Gabriel [00:59:44]: The way we structured those validations was, on the one end, validations across a whole set of problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides targeting RACC, a target involved in metabolism, and we had a number of other applications designing peptides or other modalities against other therapeutically relevant targets, and we designed some proteins to bind small molecules. The other testing we did was about getting a broader sense of how the model works, especially when tested on generalization. One of the things we found in the field is that a lot of validation, outside of validation on specific problems, is done on targets that have many known interactions in the training data. So it's always a bit hard to understand how much these models are just regurgitating or imitating what they've seen in training versus really being able to design new proteins. So one of our experiments was to take nine targets from the PDB, filtered so that there is no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein. There is no way the model can just tweak something from its training set and imitate a known interaction. We took those nine proteins, worked with Adaptyv, a CRO, and tested 15 mini proteins and 15 nanobodies against each one of them. The very cool thing we saw was that on two thirds of those targets, from those 15 designs, we got nanomolar binders. Nanomolar is, roughly speaking, just a measure of how strong the interaction is, and a nanomolar binder is approximately the binding strength you need for a therapeutic.
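That hold-out construction is what separates generalization from memorization, so it is worth pinning down. Below is a hedged sketch under assumed helpers: `homologs` finds sequence-similar proteins (in practice something like an MMseqs2 search), and each training complex is assumed to expose the IDs of the proteins it contains; the names are ours, not the paper's.

```python
# "No known interaction" hold-out: keep only candidate targets for which
# neither the protein itself nor any close homolog appears bound in a
# training complex. Helper names and attributes are illustrative.
def novel_targets(candidates, training_complexes, homologs):
    held_out = []
    for target in candidates:
        related = {target.id} | set(homologs(target.sequence))
        seen_bound = any(
            pid in complex_.protein_ids  # proteins present in a training complex
            for complex_ in training_complexes
            for pid in related
        )
        if not seen_bound:
            held_out.append(target)
    return held_out
```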
Brandon: So maybe switching directions a bit. Boltz Lab was announced this week, or was it last week? This is your first product, if you want to call it that. Can you talk about what Boltz Lab is, and what you hope people take away from it?
RJ [01:02:44]: Yeah. As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own, and there are largely two categories there; actually, I'll split it into three. The first: it's one thing to predict a single interaction, a single structure, for example. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and a real need to accompany the user through them. One of those steps is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design something against it? There are all sorts of tricks you can use to improve a particular structure prediction. Then there's the stage of designing and searching the space efficiently. For something like BoltzGen, you design many things and then rank them. For small molecules the process is a bit more complicated: we also need to make sure the molecules are synthesizable, and the way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's a whole pipeline of different models involved in being able to design a molecule. That's the first thing, and we call them agents: we have a protein design agent and a small molecule design agent, and they're at the core of what powers the Boltz Lab platform.
Brandon [01:04:22]: These agents, are they a language model wrapper, or are they just your models and you're calling them agents?
RJ [01:04:33]: They sort of perform a function on your behalf, but they're more of a recipe, if you wish. I think we use the term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's running a design campaign. Say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design, and for proteins it can be a bit longer. Ideally you want to do that in parallel, otherwise it's going to take you weeks. So we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.
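Each candidate is scored independently, so the screen is embarrassingly parallel and wall-clock time falls roughly linearly with worker count. A minimal single-machine analogue, with a dummy scorer standing in for a few seconds of GPU work per design:

```python
# Single-machine analogue of the parallel screen; a real deployment shards
# the same loop across a GPU fleet instead of local processes.
from multiprocessing import Pool

def expensive_model_score(candidate: str) -> float:
    """Dummy stand-in for a few seconds of GPU scoring per design."""
    return (hash(candidate) % 1000) / 1000.0

def screen(candidates, n_workers: int = 8):
    with Pool(n_workers) as pool:
        scores = pool.map(expensive_model_score, candidates)
    # Highest-scoring candidates first, ready for the top-k pick.
    return sorted(zip(candidates, scores), key=lambda cs: cs[1], reverse=True)
```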
Brandon [01:05:23]: So you're amortizing the cost over your users.
RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, so you might as well parallelize if you can. A lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing it at the same time. And the third part is the interface, which comes in two shapes. One is an API, and that's really suited for companies that want to integrate these pipelines, these agents. We're already partnering with a few distributors that are going to integrate our API. The second is the user interface, and we've put a lot of thought into that too; this is the idea I mentioned earlier of broadening the audience. We've built a lot of interesting features into it, for example for collaboration. When you have multiple medicinal chemists going through the results and trying to pick out which molecules to go test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results. Boltz Lab is the combination of these three objectives into one cohesive platform.
Brandon: Who is this accessible to?
RJ: Everyone. You do need to request access today, as we're still ramping up usage, but anyone can request access. If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or a biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies we can deploy the platform in a more secure environment; those are more customized deals we make with partners. That's the ethos of Boltz: this idea of serving everyone and not necessarily going after just the really large enterprises. It starts with the open source, but it's also a key design principle of the product itself.
Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by a factor of a thousand or so over the last three years, right? Is it possible to exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for anyone to roll their own system?
RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially at large scale, is considerably cheaper than it would cost anyone to take the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models: our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source. That's also part of building a product that scales really well. We really wanted to get to a point where we could keep prices low enough that using Boltz through our platform is a no-brainer.
Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But the whole point of design is to produce something that doesn't have co-evolution data, something really novel. You're leaving the domain you know you're good at. How do you validate that?
RJ [01:09:22]: There's obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? And it's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation we do, so that we track progress as scientifically soundly as possible.
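The method A versus method B comparison RJ mentions reduces to comparing hit rates on labeled lab results. A toy version, with illustrative booleans in place of real data:

```python
# Compare two methods by the fraction of their picked designs that bound.
def hit_rate(lab_labels) -> float:
    """Fraction of tested designs the lab confirmed as binders."""
    return sum(lab_labels) / len(lab_labels)

method_a = [True, False, False, True, False]  # illustrative, not real data
method_b = [True, True, False, True, False]
print(f"method A: {hit_rate(method_a):.0%}, method B: {hit_rate(method_b):.0%}")
```

In practice binder strength matters as much as the binary hit rate, so the same comparison is run over measured affinities as well.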
Gabriel [01:10:00]: Yeah. One thing that is unique about us, and maybe companies like us, is that because we're not working on a couple of therapeutic pipelines, where our validation would be focused on those, when we do an experimental validation we try to test across tens of targets. On the one end, that gives a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose…

    Zärtliche Cousinen
    Helau im Swingerclub!

    Zärtliche Cousinen

    Play Episode Listen Later Feb 12, 2026 14:25


    This weekend it's that time again: into the costumes and on with the kissing. Karneval, Fasching, Fasnacht and so on are just around the corner, and anything goes. Atze is an avowed optimist, but the news that Germans are inhaling less schnapps still shocked him. Good thing Carnival is also the high mass of the boozing guild. It's also a good tradition that after the jungle show, Heidi Klum goes looking for new models, and only a burp later Let's Dance kicks off on RTL. The Circle of Life. Instagram: https://www.instagram.com/atzeschroeder_offiziell?utm_source=ig_web_button_share_sheet&igsh=ZDNlZDc0MzIxNw== Hosted on Acast. See acast.com/privacy for more information.

    Twins Pod
    So Called "Plus Sized Models" Have Dropped The Beauties Standards All the Way To The Floor!

    Twins Pod

    Play Episode Listen Later Feb 11, 2026 10:35


    Watch The Full Episode https://youtu.be/5xmHnIlF13c Become a Member and Give Us Some DAMN GOOD Support: https://www.youtube.com/channel/UCX8lCshQmMN0dUc0JmQYDdg/join Get your Twins merch and have a chance to win our Damn Good Giveaways! - https://officialhodgetwins.com/ Get Optimal Human, your all in one daily nutritional supplement - https://optimalhuman.com/ Want to be a guest on the Twins Pod? Contact us at bookings@twinspod.com Download Free Twins Pod Content - https://drive.google.com/drive/folders/1_iNb2RYwHUisypEjkrbZ3nFoBK8k60CO Follow Hodgetwins Podcast Everywhere - X - https://x.com/hodgetwinspod Instagram - https://www.instagram.com/hodgetwinspodcast/ Facebook - https://www.facebook.com/thehodgetwins YouTube - https://www.youtube.com/@HodgetwinsPodcast Rumble - https://rumble.com/c/HodgetwinsPodcast?e9s=src_v1_cmd Spotify - https://open.spotify.com/show/79BWPxHPWnijyl4lf8vWVu Apple - https://podcasts.apple.com/us/podcast/hodgetwins-podcast/id1731232810

    AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

    In this episode, we discuss Runway's recent $315 million Series E funding round, which has boosted their valuation to $5.3 billion. We also explore their focus on 'world models' for advanced AI video generation and the evolving regulatory landscape surrounding deepfakes. Chapters: 03:24 Understanding AI World Models; 05:41 Runway's Technology & Edge; 10:22 Deepfake Regulations & Ethics; 13:40 Competition & Future Outlook. Links: Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer Join my AI Hustle Community: https://www.skool.com/aihustle

    IBM Analytics Insights Podcasts
    Taming the Messy Data Reality: Turning AI Training Chaos into an $80T IP Asset Class with Andrea Muttoni, President and CPO of Story

    IBM Analytics Insights Podcasts

    Play Episode Listen Later Feb 11, 2026 43:14


    Tackling the messy reality of data fueling artificial intelligence, Andrea Muttoni—President & CPO at Story—joins the show to unpack how Story is building an AI-native infrastructure for intellectual property and training data. We dig into making the $80T IP asset class programmable, traceable, and monetizable, and how Story aims to turn “mysterious training data blobs” into transparent rights and payments for creators and enterprises. 01:10 Meet Andrea Muttoni 06:49 Story's Core Mission 13:41 IP Monetization 21:08 Biggest Competitor 22:49 Compute, Models, & Data 27:46 What to IP, Where Not 31:16 Blockchain 34:54 Protecting Your IP 41:36 Reaching Story. Andrea explains how Story is building a blockchain-based IP and data layer so AI systems can train on licensed content while proving usage, enforcing licenses, and automating payments to rights holders. We talk about the practical challenges of cleaning and labeling real-world data, what “IP-safe” datasets look like in practice, and how developers and companies can plug into Story's infrastructure. Andrea also shares where blockchain actually adds value (and where it doesn't), why he thinks “AI can't scale on legal ambiguity,” and concrete steps creators and founders can take today to protect and monetize their IP in the AI era. LinkedIn: linkedin.com/in/muttoni Website: https://www.story.foundation/ #AITrainingData, #IntellectualProperty, #IPEconomy, #StoryProtocol, #DataInfrastructure, #AIGovernance, #AILaw, #Web3, #Blockchain, #CreatorEconomy, #DataOwnership, #RightsManagement, #Licensing, #TechPodcast, #Developers, #MachineLearning, #AIEthics, #DataMonetization. Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.


    The Vergecast
    Could the Trump Phone be a good phone?

    The Vergecast

    Play Episode Listen Later Feb 10, 2026 74:36


    The Trump Phone is real! Ish! The Verge's Dom Preston has seen a T1 on a video call, that much we can say for sure. Dom joins the show to explain what's new about the phone, whether it has a chance to be a decent device, and why it's taken so long for Trump Mobile to ship the thing. After that, The Verge's Hayden Field explains the excitement around OpenClaw and Moltbook, and whether either one is a big moment for the AI industry. Finally, The Verge's Andy Hawkins helps us answer a question on the Vergecast Hotline (866-VERGE11) about whether, and when, Tesla might get out of the car business altogether. Further reading: This is the Trump Phone; The Trump Phone no longer promises it's made in America; 600,000 Trump Mobile phones sold? There's no proof; OpenClaw: all the news about the trending AI agent; OpenClaw's AI 'skill' extensions are a security nightmare; There's a social network for AI agents, and it's getting weird; Humans are infiltrating the social network for AI bots; Tesla discontinuing Model S and Model X to make room for robots. Subscribe to The Verge for unlimited access to theverge.com, subscriber-exclusive newsletters, and our ad-free podcast feed. We love hearing from you! Email your questions and thoughts to vergecast@theverge.com or call us at 866-VERGE11. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The Neuron: AI Explained
    Why Energy-Based Models Could Be the Next Big Shift in AI

    The Neuron: AI Explained

    Play Episode Listen Later Feb 10, 2026 55:39


    Modern AI has been dominated by one idea: predict the next token. But what if intelligence doesn't have to work that way? In this episode of The Neuron, we're joined by Eve Bodnia, Founder and CEO of Logical Intelligence, to explore energy-based models (EBMs)—a radically different approach to AI reasoning that doesn't rely on language, tokens, or next-word prediction. With a background in theoretical physics and quantum information, Eve explains how EBMs operate over an energy landscape, allowing models to reason about many possible solutions at once rather than guessing sequentially. We discuss why this matters for tasks like spatial reasoning, planning, robotics, and safety-critical systems—and where large language models begin to show their limits. You'll learn: What energy-based models are (in plain English); Why token-free architectures change how AI reasons; How EBMs reduce hallucinations through constraints and verification; Why EBMs and LLMs may work best together, not in competition; What this approach reveals about the future of AI systems. To learn more about Eve's work, visit https://logicalintelligence.com. For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.

    HBO Girls Rewatch
    Sex and the City S1E2: "Models and Mortals" with Tessa Belle

    HBO Girls Rewatch

    Play Episode Listen Later Feb 10, 2026 70:57


    Tessa Belle returns to Girls Rewatch for a chaotic, thoughtful, and extremely necessary conversation about whether models should, in fact, be treated as people. We're diving into Sex and the City Season 1, Episode 2, “Models and Mortals,” the show's early takedown of the beauty industrial complex. Evan, Amelia, and Tessa unpack why SATC is somehow wildly woke and deeply problematic at the exact same time and why that tension is part of the magic that keeps us rewatching. Learn more about your ad choices. Visit megaphone.fm/adchoices

    ASGCT Podcast Network
    The Issue - Evolving Patient Access to CGTs via Innovation in Medicaid Reimbursement Models with Melissa Majerol

    ASGCT Podcast Network

    Play Episode Listen Later Feb 10, 2026 35:34


    In this episode, Melissa Majerol, Model Lead of the Cell and Gene Therapy Access Model at the Center for Medicare and Medicaid Innovation (CMMI) at CMS, breaks down how the CMMI Access Model program aims to simplify and accelerate patient access to innovative cell and gene therapies, starting with recently approved therapies in Sickle Cell disease. Listen in as we dive into the specifics of this new access model and its current status. Music: ‘Bright New Morning’ by Steven O’Brien, released under CC-BY 4.0: https://www.steven-obrien.net/ Show your support for ASGCT!: https://asgct.org/membership/donate See omnystudio.com/listener for privacy information.

    Autoline Daily - Video
    AD #4230 - New Design House Styles Ferrari's 1st EV; Robot Gymnastics Linked to Hyundai Stock Surge; Tesla Details Semi with 1,000+ Horsepower

    Autoline Daily - Video

    Play Episode Listen Later Feb 10, 2026 8:56


    - Cadillac Dealers Predict Tesla Conquests as Model S and X End Production - Tesla Semi Specs Revealed: 1,050 HP Class 8 Eyes Commercial Domination - Hyundai Stock Surges as Atlas Robot Learns Backflips and Cartwheels - UAW Confirms Ford Worker Who Harassed Trump Has No Discipline Record - 72% of German Suppliers Plan to Move Investments Abroad - Honda Earnings Collapse on Tariff Hit and EV Write Off - Honda Restructures: Software and ICE Teams Merge to Survive Asia Slump - Ferrari Posts Strong 2025 Earnings - Ferrari Turns to New Design House for Its 1st Electric Car

    Autoline Daily
    AD #4230 - New Design House Styles Ferrari's 1st EV; Robot Gymnastics Linked to Hyundai Stock Surge; Tesla Details Semi with 1,000+ Horsepower

    Autoline Daily

    Play Episode Listen Later Feb 10, 2026 8:41 Transcription Available


    - Cadillac Dealers Predict Tesla Conquests as Model S and X End Production - Tesla Semi Specs Revealed: 1,050 HP Class 8 Eyes Commercial Domination - Hyundai Stock Surges as Atlas Robot Learns Backflips and Cartwheels - UAW Confirms Ford Worker Who Harassed Trump Has No Discipline Record - 72% of German Suppliers Plan to Move Investments Abroad - Honda Earnings Collapse on Tariff Hit and EV Write Off - Honda Restructures: Software and ICE Teams Merge to Survive Asia Slump - Ferrari Posts Strong 2025 Earnings - Ferrari Turns to New Design House for Its 1st Electric Car

    Today in Manufacturing
    Tesla Kills 2 Models; Honda, GM End Fuel Cell Project; First Brands CEO Scandal | Today in Manufacturing Ep. 255

    Today in Manufacturing

    Play Episode Listen Later Feb 9, 2026 52:02


    The Today in Manufacturing Podcast is brought to you by the editors of Manufacturing.net and Industrial Equipment News (IEN). This week's episode is brought to you by Epicor. Did you know that: 70% of companies say generative AI has either already disrupted their business, is starting to disrupt their business, or will have significant impact in the next 18 months. 32% of companies view AI as the most important technology in their organization today. 28% of companies will pay more for AI as they consider it the most essential in their SaaS application. A new report, “AI and Manufacturing: How AI Is Reshaping Manufacturing Strategies,” tells you how to implement AI and increase time to value with AI. Download the report right now. Every week, we cover the biggest stories in manufacturing, and the implications they have on the industry moving forward. This week: - Former First Brands CEO Patrick James and His Brother Are Indicted for Bilking Billions from Banks - Honda, GM to End U.S. Manufacturing Joint Venture This Year - Tesla to Close Down Production of Two Car Models. In Case You Missed It: - Waymo Gets $16 Billion Injection - Trump Administration to Create a Strategic Reserve for Rare Earth Elements - Porsche May Kill These 2 Models Before They Even Hit the Market. Please make sure to like, subscribe and share the podcast. You could also help us out a lot by giving the podcast a positive review. Finally, to email the podcast, you can reach any of us at David, Jeff or Anna [at] ien.com, with “Email the Podcast” in the subject line.

    The Underpowered Hour
    Land Rover Tariffs, New Electric Models, and a 1951 Series I Purchase

    The Underpowered Hour

    Play Episode Listen Later Feb 9, 2026 28:04


    Hosts Steve Beres and Ike Goss discuss a range of topics including Land Rover's impending $520 million tariff hit in the U.S., the introduction of new electric models like the electric Velar replacement and Evoque, and a deep dive into the acquisition of a vintage 1951 Land Rover Series I. They also delve into skiing accidents affecting Land Rover influencers, their upcoming travels, and future plans for vehicle restorations.

    Becker’s Healthcare Podcast
    Rethinking Nursing Workforce Models at Duke Health

    Becker’s Healthcare Podcast

    Play Episode Listen Later Feb 8, 2026 15:50


    In this episode, Theresa McDonnell, DNP, RN, Chief Nursing Executive at Duke University Health System, discusses the workforce math problem facing healthcare and how nursing roles are evolving. She shares how team-based care, virtual nursing, and innovation units are reshaping care delivery and supporting nurse well-being.

    Waveform: The MKBHD Podcast
    Tesla Gives Up on Model S & X

    Waveform: The MKBHD Podcast

    Play Episode Listen Later Feb 6, 2026 112:25


    This week, the crew sits down and immediately jumps into the new Studio video that just dropped, which documented all of 2025! After that, they discuss the news that Tesla is discontinuing the Model S and Model X. Then they try to figure out what OpenClaw is before playing another game that Andrew made up! It's a fun one. Enjoy! Shop the merch: https://shop.mkbhd.com Links: 9to5Google - Qi2 S26 cases Tesla - Production and deliveries AppleInsider - Apple removes Pro app subscription MoltBook Wired - Nintendo Switch virtual boy Verge - Abxylute Switch controllers Music provided by: Epidemic Sound Social: Waveform Threads: https://www.threads.net/@waveformpodcast Waveform Instagram: https://www.instagram.com/waveformpodcast/?hl=en Waveform TikTok: https://www.tiktok.com/@waveformpodcast Hosts: Marques: https://www.threads.net/@mkbhd Andrew: https://www.threads.net/@andrew_manganelli David: https://www.threads.net/@davidimel Adam: https://www.threads.net/@parmesanpapi17 Ellis: https://twitter.com/EllisRovin Mariah: https://www.instagram.com/totallynotabusinessacc Join the Discord: https://discord.gg/mkbhd Intro/Outro music by 20syl: https://bit.ly/2S53xlC Waveform is part of the Vox Media Podcast Network. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Everyday AI Podcast – An AI and ChatGPT Podcast
    Ep 708: Inside the Society of Agents: Why AI Teamwork Beats Bigger Models

    Everyday AI Podcast – An AI and ChatGPT Podcast

    Play Episode Listen Later Feb 6, 2026 32:55


    Catalog & Cocktails
    It's Friday, Juan and Tim rant about Decisions, Context, MCP and Maturity Models

    Catalog & Cocktails

    Play Episode Listen Later Feb 6, 2026 27:38


    Juan and Tim grab a beer and rant about decision intelligence and context graphs, MCP vs Skills, how companies really have a work problem (not a data/AI problem), and what the maturity model is to get that work done. See omnystudio.com/listener for privacy information.

    Autonocast
    #356: Abandoned by FSD: Tesla's Pivot, Waymo's Crash, and the New Autonomy Narrative War

    Autonocast

    Play Episode Listen Later Feb 6, 2026 59:15


    Alex recounts his latest cross-country Tesla Full Self-Driving (FSD) attempt and explains why “zero-disengagement” claims often hide major differences in what counts as an intervention—plus an unforgettable moment where the car nearly strands a co-driver at a sub-zero truck stop. Kirsten, Ed & Alex then dig into Tesla's decision to end Model S and Model X production, the company's escalating bet on Optimus humanoid robots, and growing signals of deeper alignment with xAI (and even potential mega-merger vibes with SpaceX). Plus the latest Waymo controversy after a robotaxi struck a child in Santa Monica, the investigations and media narrative battle, and what these incidents mean for public trust in autonomous vehicles.

    The Scoot Show with Scoot
    New Orleans-born punk new wave "The Models" make 45 years!

    The Scoot Show with Scoot

    Play Episode Listen Later Feb 6, 2026 8:19


    Nostalgia overload! Michael Ciravolo joins the show to spill details on his New Orleans-based band The Models' start in the very early 80's - and their 45th anniversary at Jimmy's this weekend

    The Scoot Show with Scoot
    Full Show 2/6/2026: It's time to get Mad (hatters)

    The Scoot Show with Scoot

    Play Episode Listen Later Feb 6, 2026 93:27


    Roger has lost his ever lovin' mind with these deals on Mardi Gras gear; Who's worse, local drivers or out of town drivers? New Orleans-born punk new wave "The Models" make 45 years! Peter Noone aka Herman is still rockin with the Hermits; LACOSTE IN THE HEEZY

    Accidental Tech Podcast
    677: I Accept the Battery Cost

    Accidental Tech Podcast

    Play Episode Listen Later Feb 5, 2026 121:32


    Pre-show: Icepocalypse update Follow-up: Casey’s unit conversions are helpful… ish Other authentication options for John’s Cloudflare apps Cloudflare Access Zero trust architecture OpenID connect tsidp Pocket ID Per-pixel lighting and human perception (via David Schaub) The state of MicroLED Robert Tait of The Hook Up Claude Code & AI ethics Alec’s video Irish wind farm Top Dogs in Apple Vision Pro Marco’s experiment Cortex DuckDuckGo Kagi Apple adds agentic coding to Xcode 26.3 John’s Tahoe icon battle continues Rubber duck debugging Kyle Hughes Drop shipping Post-show: Tesla kills off the Model S and Model X Most recent earnings Earnings call transcript The Robotaxi is dangerous according to… Tesla We really need to get to self-driving Waymo’s safety record Members-only ATP Overtime: Apple’s rumored AI pin Apple buys Q.ai Ryan Chistoffel Sponsored by: Gusto: Payroll and benefits software built for small businesses. Masterclass: Learn from the world's best. Video lessons that inspire. Factor: Healthy Eating, Made Easy. Get 50% off your first box, plus free breakfast for 1 year. Become a member for ATP Overtime, ad-free episodes, member specials, and our early-release, unedited “bootleg” feed!

    CryptoNews Podcast
    #515: Matt Ober, Managing Partner at Social Leverage, on Prediction Markets, Consumption-based Data Models, Stablecoins, Tokenization, Identifying Trends, and The Degenerate Economy

    CryptoNews Podcast

    Play Episode Listen Later Feb 5, 2026 30:15


    Matt Ober is a Managing Partner at Social Leverage, an early stage venture capital firm focused on investing in fintech and enterprise SaaS. The firm is best known for being an early investor in Robinhood, eToro, Alpaca, and Kustomer, which was acquired by Meta. Matt was most recently the Chief Data Scientist at Third Point. Prior to joining Third Point, Matt was the Head of Data Strategy at WorldQuant and part of the WorldQuant Ventures founding team focused on private investments in fintech, data, and technology companies. Matt is the founder of InitialDataOffering.com, which is the largest community of data buyers and data vendors. Matt holds a Chartered Alternative Investment Analyst (CAIA) designation and sits on the board of governors for his alma mater, California State University Chico. In this conversation, we discuss: - Identifying trends - Prediction Markets allowing people to monetize their niche - The degenerate economy & financial nihilism - Never been a better time to build a business - Tokenization of all assets - Stablecoins - Consumption-based data models - Early-stage investing - How wealth management technology is changing - How AI is influencing all startups Social Leverage X: @SocialLeverage Website: www.socialleverage.com LinkedIn: Social Leverage Matt Ober X: @obermattj LinkedIn: Matt Ober --------------------------------------------------------------------------------- This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers. PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50 This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50 Follow Apple Podcasts Spotify Amazon Music RSS Feed

    DH Unplugged
    DHUnplugged #789: Crash Test For Dummies

    DH Unplugged

    Play Episode Listen Later Feb 4, 2026 65:40


    WORST DAY EVER for SILVER Cold Snap in Florida – Massive Critter Drop New Fed Chair named Pausing on space PLUS we are now on Spotify and Amazon Music/Podcasts! Click HERE for Show Notes and Links DHUnplugged is now streaming live - with listener chat. Click on link on the right sidebar. Love the Show? Then how about a Donation? Follow John C. Dvorak on Twitter Follow Andrew Horowitz on Twitter Interactive Brokers Warm-Up - WORST DAY EVER for SILVER - Cold Snap in Florida - Massive Critter Drop - New Fed Chair named - Pausing on space Markets - Bitcoin plunges - Crypto "winter" - Deep dive into January economic results - USD rises from multi-month low - EM still powered ahead - ELON - PT Barnum move Cold Snap - On February 1, 2026, Florida faced a significant drop in temperatures, reaching a record low of 24°F (-4°C) in Orlando. This marked the lowest temperature recorded in February since 1923. - Iguanas dropping from trees all over the streets - Iguanas can survive temperatures down to the mid-40s Fahrenheit (around 7°C) by entering a "cold-stunned" state, where they appear dead but are just temporarily paralyzed and immobile; however, prolonged exposure to temperatures in the 30s and 40s, especially below freezing, can be lethal, particularly for smaller individuals, leading to tissue damage and organ failure. - They get sluggish below 50°F (10°C) and fall from trees as they lose grip. - The Florida Fish and Wildlife Conservation Commission (FWC) issued Executive Order 26-03 on Friday, allowing residents to collect and surrender cold-stunned green iguanas without a permit during an unprecedented cold weather event. Right on Schedule - Remember we talked about how the Nat Gas price was going to reverse, just as quickly as it spiked? - Nat gas down 25% today - down about 28% from recent high - Still about 50% higher than it was before the spike. THIS! - Nvidia Corp. Chief Executive Officer Jensen Huang said the company's proposed $100 billion investment in OpenAI was “never a commitment” and that the company would consider any funding rounds “one at a time.” - “It was never a commitment,” Huang told reporters in Taipei on Sunday. “They invited us to invest up to $100 billion and of course, we were very happy and honored that they invited us, but we will invest one step at a time.” Then Oracle announced that it will do a fundraiser in the form of equity and debt - needs to fund more datacenter build-out. - What happened to the OpenAI $300 Billion commitment? - Or is the money that NVDA "committed" to OpenAI, which they must have committed to Oracle, not a commitment? - GIGANTIC CIRCLE JERK Fungus - Interesting - Did you know? Botrytis cinerea, a fungus causing grey mold, affects grapes by causing bunch rot, ruining fruit in high humidity. - While it often destroys crops, specific dry, warm conditions can transform it into "noble rot," concentrating sugars and creating high-value dessert wines (e.g., Sauternes, Tokaji) with honeyed, raisin-like, and apricot flavors. January Economic Review Employment — Job growth was nearly flat in December, with 50,000 new jobs added and earlier months revised lower. — Unemployment dipped slightly to 4.4%, but it's still higher than it was a year ago. — Long-term unemployment didn't change and remains high, and the labor force participation rate slipped to 62.4%. — Average hourly earnings rose 0.3% in December and are up 3.8% over the past year.
— Weekly jobless claims stayed close to last year's levels, showing a labor market that is cooling but not weakening sharply. FOMC / Interest Rates — The Federal Reserve kept interest rates unchanged at 3.50%–3.75%. — Most policymakers agreed the economy continues to grow at a solid pace, though job gains are slowing and inflation remains above target. — Two committee members supported a small rate cut, but the majority preferred to wait. - Fed Chair Powell: Clearly, a weakening labor market calls for cutting. A stronger labor market says that rates are in a good place. It isn't anyone's base case right now that the next move will be a rate hike. - The economy has once again surprised us with its strength. Consumer spending numbers overall are good, and it looks like growth overall is on a solid footing. - Upside risks to inflation and downside risks to employment have diminished, but hard to say they are fully in balance. We think our policy is in a good place. - Overall, it's a stronger forecast since the Fed's last meeting. Haven't made any decisions about future meetings, but the economy is growing at a solid pace, the unemployment rate is broadly stable and inflation remains somewhat elevated, so we will be looking to our goal variables and letting the data light the way for us. - Most of the overrun in goods prices is from tariffs. We think tariffs are likely to move through, and be a one-time price increase. - Dissent: Miran and Waller (Miran is an admin shill and Waller wanted the job as Fed Chair) GDP & Federal Budget — Economic growth remained strong in Q3 2025, with GDP rising at an annualized 4.4% driven by strong spending, higher exports, and reduced imports due to tariffs. — Investment was mixed, with business spending increasing while housing activity declined. — The federal deficit for December rose to $145 billion, though the fiscal year-to-date deficit is slightly smaller than last year. Inflation & Consumer Spending — Personal income and consumer spending rose moderately in October and November. — Inflation, measured by the PCE index, increased 0.2% in both months and roughly 2.7% year-over-year. — The Consumer Price Index rose 0.3% in December, with shelter, food, and energy all contributing. — Producer prices also increased, though 2025 producer inflation slowed compared to 2024. Housing — Existing home sales rose in December, but the number of homes for sale is still low. — Prices dipped a bit from November but remain higher than they were a year ago. — New-home sales in October were steady compared with the prior month but much higher than last year. — New-home prices fell compared to 2024, though they are still high relative to long-term norms. Manufacturing — Industrial production rose 0.4% in December and was up 2.0% for the year. — Manufacturing output increased, while mining activity declined and utility output jumped. — Durable goods orders grew sharply in November, driven by a big increase in transportation equipment, pointing to strong demand in key industries. Imports & Exports — Import and export prices rose slightly through November 2025. — The goods trade deficit widened in November because exports fell while imports increased. — For the year so far, both exports and imports are running above 2024 levels, though the overall trade deficit remains larger. Consumer Confidence — Consumer confidence fell sharply in January after improving in December.
— Both views of current conditions and expectations for the future weakened, with expectations dropping well below the level that often signals recession risk. Earnings — Roughly one-third of S&P 500 companies have reported Q4 earnings, and overall results are strong. — 75% of companies have beaten EPS estimates, though this is slightly below long-term averages. Revenue beats remain solid at 65%. — Companies are reporting earnings 9.1% above estimates, which is well above the 5- and 10-year surprise averages. — The S&P 500 is on track for 11.9% year-over-year earnings growth, marking the 5th straight quarter of double-digit earnings growth. — Eight of eleven sectors are showing positive year-over-year earnings growth, led by Information Technology, Industrials, and Communication Services. — The Health Care sector shows the largest earnings declines among lagging categories. — The forward 12-month P/E ratio sits at ~22.2, elevated relative to 5- and 10-year averages, signaling continued optimism despite tariff and cost concerns. — FactSet also notes the S&P 500 is reporting a record-high net profit margin of 13.2%, the highest since 2009. INTERACTIVE BROKERS Check this out and find out more at: http://www.interactivebrokers.com/ S3XY No More - Tesla is ending production of the Model S sedan and Model X crossover by the end of Q2 2026 to focus on autonomous technology and humanoid robots (Optimus). - Do we have any idea what the TAM for either of these is? - Huge assumptions that Robotaxi will be a big part of global transportation. But, what if it isn't? - Unproven being built, taking out the proven - investors were not too happy about this... Stock was down after earnings showed continued sluggish EV sales and BIG Capex for Robotaxi refit, robots and chip manufacturing. But... - Friday - not to allow Tesla stock to move down too much. - With SpaceX looking for an IPO in June - valuations have moved from $800B to $1.5T supposedly. - Now there is discussion of merging in xAI and possibly Tesla - Tesla shares dropped after earnings FED CHAIR PICK - Drumroll: Kevin Warsh - Seems like a good pick from the aspect of experience and ability - Deficit reducer? - More hawkish than market expected? - Announced Friday after several leaks in the morning And then... - Silver futures plummeted 31.4% to settle at $78.53, marking their worst day since March 1980. - It was down 35% during the day - the worst daily plunge ever on record. - It was the worst decline since the March 1980 Hunt Brothers crash. - The sharp moves down were initially triggered by reports of Warsh's nomination. - However, they gained steam in afternoon U.S. trading as investors who piled into the metals raced to book profits. - USD spiked higher - Gold was down 10% - GOLD saw a drop of 10% to the close - 12% intraday - this was also a record - Bitcoin is down 25% from its recent level 2 weeks ago - ALL BEING BLAMED ON THE FED CHAIR PICK -- QUESTION - Will Trump back-pedal this OR talk to supporters in congress or tell them not to confirm him if markets continue to act squirrely? Fed Statement and Rates - Fed out with statement - no change on rates - Changes: Inflation up, employment steady, economy strong - Does not bode for much in the way of cuts - probably on hold through end of Powell term Apple Earnings - Apple reported blowout first-quarter earnings on Thursday, and predicted growth of as much as 16% in the current quarter, matching the period that just ended.
- Sales could be even better, Apple said, if the company could just secure enough chips to meet its customers' iPhone demands. - The company reported $42.1 billion in net income, or $2.84 per share, versus $36.33 billion, or $2.40 per share, in the year-ago period. - Apple saw particularly strong results in China, including Taiwan and Hong Kong. Sales in the region surged 38% during the quarter to $25.53 billion. - “The constraints that we have are driven by the availability of the advanced nodes that our SoCs are produced on, and at this time, we're seeing less flexibility in supply chain than normal,” Apple CEO Tim Cook said. - Stock up slightly - no great moves.... Blue Origin - Blue Origin will pause tourist flights to space for “no less than two years” to prioritize development of its moon lander and other lunar technologies. - The decision reflects Blue Origin's commitment to the nation's goal of returning to the Moon and establishing a permanent, sustained lunar presence. - The pause in tourist flights grounds the company's reusable New Shepard rocket, which has sent more than 90 people to the edge of space and back to experience brief periods of weightlessness. - Datacenters on the Moon? (sounds like a Pink Floyd album) Love the Show? Then how about a Donation? ANNOUNCING THE WINNER OF THE CLOSEST TO THE PIN CUP 2025 Winners will be getting great stuff like the new "OFFICIAL" DHUnplugged Shirt! FED AND CRYPTO LIMERICKS See this week's stock picks HERE Follow John C. Dvorak on Twitter Follow Andrew Horowitz on Twitter

    Big Technology Podcast
    AI's Research Frontier: Memory, World Models, & Planning — With Joelle Pineau

    Big Technology Podcast

    Play Episode Listen Later Feb 4, 2026 54:51


    Joelle Pineau is the chief AI officer at Cohere. Pineau joins Big Technology Podcast to discuss where the cutting edge of AI research is headed — and what it will take to move from impressive demos to reliable agents. Tune in to hear why memory, world models, and more efficient reasoning are emerging as the next big frontiers, plus what current approaches are missing. We also cover the “capability overhang” in enterprise AI, why consumer assistants still aren't lighting the world on fire, what AI sovereignty actually means, and whether the major labs can ever pull away from each other. Hit play for a cool-headed, deeply practical look at what's next for AI and how it gets deployed in the real world. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

    Tech45
    #727: Ruimtepuin en maneschijn

    Tech45

    Play Episode Listen Later Feb 4, 2026 80:41


    Tech news: Apple has a good Q1 behind it; Tesla stops production of the Model S and Model X to focus on robots; Musk's SpaceX combines with xAI at $1.25 trillion valuation; Artemis II stands ready to launch; Belgian school falls victim to ransomware, hackers also demand ransom from parents; LG pauses production of 8K TVs, as no manufacturer currently produces 8K sets; How we'll build the device database, together. Reportage: Monopoly X, the game that let prisoners of war escape. Deep dive: satellites sit in different orbits around the Earth; Timeline of satellites and space probes; Starlink; Amazon Leo; IRIS²; Outer Space Treaty; Data centers in space; Solar harvesting in space

    The Smoking Tire
    We Almost Rolled the INEOS Wagon

    The Smoking Tire

    Play Episode Listen Later Feb 3, 2026 120:46


    Matt Farah and Zack Klapman went off-roading in the new 2026 INEOS Grenadier Station Wagon Trialmaster and it almost went very wrong; a full review of the INEOS and its competitors; Tesla is ending production of the Model S and X, so we talk about why and what it means for the future; Washington D.C. is going to have an INDY street race apparently; and we answer 50 questions from our Patreon members, including: Would we rather be mid-tier racing drivers or have our current jobs? Which supercar should have had a diesel engine? Miata RF or soft-top? Best M5: E90 M5 or E90 M3? Most enthusiast-magnet nerd car? Are "classic" tires worth the price for my older car? Which cars does every enthusiast need to drive? More timeless design: Lexus GS or C6 Corvette? Do we like bigger wheels? When? Were early paddle-shifted transmissions cool when they came out? That new Porsche Rally GT. Why does the S63 AMG feel like too much AMG and not enough S? F1 2026: Audi vs Cadillac. Thoughts? Which cars depreciate TOO fast? Why don't STIs hold value? Options that made cars unreliable. Would you always choose a lighter engine in a race car? Foreign car brands that would work here. And many more! Recorded February 2, 2026. Truewerk: Get 15% off your first order at https://www.truewerk.com using CODE: TIRE. Fitbod: Join Fitbod today to get your personalized workout plan. Get 25% off your subscription or try the app FREE for seven days at https://www.Fitbod.me/TIRE. Want your question answered? Want to watch the live stream, get ad-free podcasts, or exclusive podcasts? Join our Patreon: https://www.patreon.com/thesmokingtirepodcast Use Off The Record! and ALWAYS fight your tickets! Enter code TST10 for a 10% discount on your first case on the Off The Record app, or go to http://www.offtherecord.com/TST. Watch our car reviews: https://www.youtube.com/thesmokingtire Tweet at us! https://www.Twitter.com/thesmokingtire https://www.Twitter.com/zackklapman Instagram: https://www.Instagram.com/thesmokingtire https://www.Instagram.com/therealzackklapman

    Know Thyself
    E181 - Joscha Bach: A Cognitive Scientist's Guide to Consciousness & The Illusion of Reality

    Know Thyself

    Play Episode Listen Later Feb 3, 2026 87:01


    Joscha Bach explores the nature of consciousness, free will, and reality through the lens of computation, cognitive science, and philosophy. Rather than treating the mind as a mystical entity, Joscha frames consciousness as a constructed dream—a model generated by the brain to make sense of the world and coordinate behavior. We examine why beliefs should remain provisional, how the self functions as a useful fiction, and why suffering emerges when internal learning signals misfire. Joscha explains why free will feels real even if decisions arise before awareness, how meaning exists beyond the individual ego, and why wisdom is not simply knowledge but the ability to orient oneself within larger systems of value. BON CHARGE - 15% off red light therapy products I personally use: https://www.boncharge.com/knowthyself [Code: KNOWTHYSELF] André's Book Recs: https://www.knowthyselfpodcast.com/book-list
    00:00 — Intro: Joscha Bach
    04:24 — Agnosticism, Evidence, and Logical Alternatives
    11:20 — Reality as a Mental Simulation
    13:00 — What Physicalism Actually Claims
    16:55 — Telepathy, Rituals, and Distributed Minds
    19:45 — Consciousness Does Not Make Decisions
    22:55 — Free Will as a Post-Hoc Story
    24:00 — Consciousness as a Trance State
    26:00 — Meditation and the Illusion of Self
    29:10 — Out-of-Body Experiences Explained
    31:07 — Ad: BON CHARGE
    36:30 — Why the Brain Fills in Missing Reality
    39:50 — Dreams, Selves, and Narrative Identity
    43:20 — Intelligence, Models, and World-Building
    47:10 — Why Reality Feels Stable
    51:00 — Meaning, Agency, and Mental Compression
    55:10 — Why Consciousness Feels Central (But Isn't)
    59:30 — The Psychological World vs Physical Reality
    1:04:10 — Intelligence Without Awareness
    1:08:45 — The Cost of Believing the Self Is Real
    1:13:30 — Waking Up From the Narrative
    1:18:40 — What a Cognitive Science View Really Implies
    1:23:30 — Final Thoughts: Living Inside the Dream
    Episode Resources: https://www.cimc.ai/ https://www.instagram.com/andreduqum/ https://www.instagram.com/knowthyself/ https://www.youtube.com/@knowthyselfpodcast https://www.knowthyselfpodcast.com
    Listen to the show: Spotify: https://spoti.fi/4bZMq9l Apple: https://apple.co/4iATICX

    Car Stuff Podcast
    Tesla Kills Models, Mitsubishi Outlander Review, Best Cars of 2026

    Car Stuff Podcast

    Play Episode Listen Later Feb 3, 2026 57:39


    Jill and Tom sift through an enormous amount of news this week, including the announced deaths of four car models, two of them from Tesla. Other news items covered include a name for Ford's new electric small truck, BMW's new electric-car architecture, and the unexpected comeback of the minivan. Still in the first segment, Jill reviews the 2026 Mitsubishi Outlander small crossover. Listen in for her take. In the second segment the hosts welcome Brian Moody of Kelley Blue Book to the show. Brian shares his take on the new-car market in 2026, as well as KBB's Best Buy picks. In the third segment Jill is subjected to Tom's “How Red is It?” Quiz. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    The Data Center Frontier Show
    Google Cloud on Operationalizing AI: Why Data Infrastructure Matters More Than Models

    The Data Center Frontier Show

    Play Episode Listen Later Feb 3, 2026 32:26


    In the latest episode of the Data Center Frontier Show Podcast, Editor in Chief Matt Vincent speaks with Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, about the real challenge facing enterprise AI: connecting powerful models to real-world operational data. While large language models continue to advance rapidly, many organizations still struggle to combine unstructured data (e.g., documents, images, and logs) with structured operational systems like customer databases and transaction platforms. Krishnamurthy explains how vector search and hybrid database approaches are helping bridge this gap, allowing enterprises to query structured and unstructured data together without creating new silos. The conversation highlights a growing shift in mindset: modern data teams must think more like search engineers, optimizing for relevance and usefulness rather than simply returning exact database results. At the same time, governance and trust are becoming foundational requirements, ensuring AI systems access accurate data while respecting strict security controls. Operating at Google scale also reinforces the need for reliability, low latency, and correctness, pushing infrastructure toward unified storage layers rather than fragmented systems that add complexity and delay. Looking toward 2026, Krishnamurthy argues the top priority for CIOs and data leaders is organizing and governing data effectively, because AI systems are only as strong as the data foundations supporting them. The takeaway: AI success depends not just on smarter models, but on smarter data infrastructure.
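    To make the hybrid-query idea concrete, here is a minimal sketch of a single statement that mixes an exact relational predicate with a vector-similarity ranking. It assumes PostgreSQL with the pgvector extension rather than any specific Google Cloud product; the products table, embedding column, connection string, and embed() helper are hypothetical stand-ins for illustration only.

    import psycopg2  # assumes psycopg2 is installed and pgvector is enabled in the database

    def embed(text: str) -> list[float]:
        # Hypothetical stand-in for a real embedding model: hash characters into a
        # small fixed-size vector so the example stays self-contained.
        vec = [0.0] * 8
        for i, ch in enumerate(text):
            vec[i % 8] += ord(ch) / 1000.0
        return vec

    def to_vector_literal(vec: list[float]) -> str:
        # pgvector accepts a '[x,y,z]' string literal cast with ::vector.
        return "[" + ",".join(str(x) for x in vec) + "]"

    # One statement that filters on a structured column and ranks by semantic
    # similarity, so operational and unstructured signals are queried together
    # instead of living in separate silos.
    QUERY = """
        SELECT id, name, price
        FROM products                        -- structured, operational table
        WHERE price < %s                     -- exact relational predicate
        ORDER BY embedding <-> %s::vector    -- pgvector L2 distance to the query
        LIMIT 5;
    """

    conn = psycopg2.connect("dbname=shop")  # hypothetical connection string
    with conn, conn.cursor() as cur:
        cur.execute(QUERY, (100.0, to_vector_literal(embed("noise-cancelling headphones"))))
        for row in cur.fetchall():
            print(row)

    The ranking step is where the "think like a search engineer" framing shows up: the WHERE clause guarantees exactness, while the ORDER BY returns the most relevant rows rather than a single exact match.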

    China EVs & More
    Episode #236 - Tesla Kills Icons, China EVs Surge, and the Awards That Define the Year

    China EVs & More

    Play Episode Listen Later Feb 3, 2026 50:56 Transcription Available


    In Episode 236, Tu and Lei deliver one of their most wide-ranging and revealing conversations yet—covering Tesla's strategic retreat from cars, China's accelerating dominance in EVs, autonomy, and robotics, and unveiling the inaugural China EVs & More Awards, the EVies. The episode opens with Tesla's bombshell earnings call: the Model S and Model X are effectively retired, revenues decline for a second straight year, yet the stock rallies on promises of robotaxis, robotics, and AI abundance. Tu and Lei explain why Wall Street is betting on a future Tesla that is no longer a car company—and why China's crowded robotaxi and robotics markets make that future far less certain than investors believe. They contrast Tesla's promises with reality on the ground in China, where BYD, NIO, XPeng, Huawei, Geely, and Xiaomi are rapidly upgrading ADAS, launching new models, and redefining value. The discussion highlights how Western media is only now "discovering" vehicles like the Xiaomi SU7 and YU7, despite Chinese OEMs offering Model 3/Y-level features at half the price. The second half of the episode introduces the China EVs & More Awards, recognizing the companies, products, and people that defined the year—while exposing who fell behind. From Zombie Company of the Year to EV of the Year, the awards spark debate around survival, execution, and scale in the world's most competitive auto market. The episode closes with a sober look at automation, delivery, labor displacement, and UBI, asking whether autonomy will ultimately create abundance—or social shock—across global mobility systems. Insightful, provocative, and data-driven, this episode explains why China EV Inc. is no longer the future—it's the present.

    Turn Down for Watt
    Watts In The News - Tesla's HUGE Mistake: Why They're Moving Too Fast!

    Turn Down for Watt

    Play Episode Listen Later Feb 3, 2026 14:22


    This week on Watts in the News, we break down one of the biggest shifts in Tesla's history. Tesla has officially canceled the Model S and Model X, ending new sales as production lines at Fremont are converted to build Optimus humanoid robots. The move signals a major pivot toward AI, robotics, Full Self-Driving (FSD), and Robotaxi, raising big questions about Tesla's long-term direction. We're joined by Tesla legends and YouTubers Bearded Tesla Guy and JoshWest 24/7, who bring firsthand perspective from years of Tesla ownership, long-distance road trips, and a recent FSD drive across the United States. Together, we discuss whether Tesla is moving too fast, whether the company is shifting away from personally owned vehicles, and what this means for future products like Robotaxi, Cybercab, and even the Cybertruck potentially operating autonomously for cargo and logistics.
    ➡️ Tesla Conversation with @BeardedTeslaGuy and @JoshWest247: https://youtu.be/XS3rPj-LPI4
    ➡️ Jessie's Trip to Sweden with @kempower: https://youtu.be/A-ifvVG_qPc
    ➡️ BowefamilyEV Silverado, Lightning at Gravity at the Tesla V4 Chargers!: https://youtu.be/SHyH59MANtY
    We also cover the importance of charging infrastructure at scale. Co-host Jessie shares insights from his upcoming trip to Sweden with Kempower, where he'll see megawatt charging in action, and we discuss why megawatt-level charging is critical for heavy-duty EVs. Plus, we break down Tesla's megawatt charging agreement with Pilot and what it means for the future of electric trucking in North America. From Tesla's vehicle cancellations to Optimus, autonomy, and charging at massive scale — this episode tackles the question: Is Tesla building the future too early, or exactly on time?

    The Millionaire Real Estate Agent | The MREA Podcast
    120. From Hustle Plateau to Scaling With a Model With Alison Harris

    The Millionaire Real Estate Agent | The MREA Podcast

    Play Episode Listen Later Feb 2, 2026 39:06


    Watch the full episode on our YouTube channel: youtube.com/@mreapodcast
    Most people want change. Very few are willing to follow a model long enough to earn it. In this episode, we sit down with Alison Harris, a real estate agent out of Savannah, Georgia, who made a bold decision in 2021 to relaunch her entire business from the ground up. Forty months later, she crossed the million-dollar GCI mark by committing to a clear model and running it with discipline. Alison walks us through the Six Personal Perspectives and how each one showed up in her real life, not as theory, but as daily behavior. We unpack what it really means to commit to self-mastery, why the 80/20 principle gave her time back, and how moving from entrepreneurial to purposeful changed everything. This is not a story about a magic lead source or a shiny new system. Alison is clear about that. The growth came from magic in the mundane—building a five-star database, running a consistent touch program, holding people accountable, and committing to a learning-based business even when it was uncomfortable. If you've ever felt capped by hustle, stuck under a ceiling, or frustrated that effort isn't translating into freedom, this conversation delivers a framework you can apply immediately. No hype. No shortcuts. Just a proven path, run the right way.
    Resources:
    Explore: BOLD at Keller Williams
    Learn: The Six Personal Perspectives by Gary Keller
    Download: MREA Podcast Notes and Models at mreanotes.com
    Order the Millionaire Real Estate Agent Playbook | Volume 3
    Connect with Jason: LinkedIn
    Produced by NOVA
    This podcast is for general informational purposes only. The views, thoughts, and opinions of the guest represent those of the guest and not Keller Williams Realty, LLC and its affiliates, and should not be construed as financial, economic, legal, tax, or other advice. This podcast is provided without any warranty, or guarantee of its accuracy, completeness, timeliness, or results from using the information. WARNING! You must comply with the TCPA and any other federal, state or local laws, including for B2B calls and texts. Never call or text a number on any Do Not Call list, and do not use an autodialer or artificial voice or prerecorded messages without proper consent. Contact your attorney to ensure your compliance.

    Rain City Supercars
    Bring Back Stereo Dolphins!

    Rain City Supercars

    Play Episode Listen Later Feb 2, 2026 51:16


    Bring back stereo dolphins! We think there's a market for simplified vehicles that will outlast you with proper maintenance, and we think diesel, non-hybrid trucks with no built-in infotainment and manual options would be just the thing. Doug DeMuro disagrees, but do you? It turns out about half a million Washington drivers have expired tabs. In a surprise to no one living in an area of constantly increasing taxes and costs, more drivers than ever are skipping tab renewals. We often forget the tabs on our motorcycles and trailers, but what about you? Did you hear this episode and suddenly realize your tabs are still from 2021? Living in that RTA zone where it just doesn't make sense to renew them? Tesla pulls the plug on the Model S and Model X; is it a sign of decreasing demand for EVs? Questions for the audience this week! Is simplicity something you want and would actually pay for? Is there an actual market for vehicles like the 70 Series, or is it just a nice idea? Which vehicles are close to perfect except for one or two glaringly obvious flaws, like the Prelude? What other manufacturers are obviously missing the mark on their models besides the Honda Prelude? Did you "forget" to renew your tabs? Is it just too expensive now? The Avants Podcast is brought to you by our friends at STEK USA and Carter Seattle! Not an Avants member? https://www.avants.com/member-plans Leave us a review on Apple Podcasts! Leave us a voicemail or send us a text any time at 425-298-7873! We're doing giveaways! Leave us a review on Apple Podcasts and we'll pick a random name every 25th review!

    Ride the Lightning: Tesla Motors Unofficial Podcast
    Episode 548: Model S and X Discontinued in Earnings Call Shocker

    Ride the Lightning: Tesla Motors Unofficial Podcast

    Play Episode Listen Later Feb 1, 2026 123:33


    Tesla's Q4 2025 earnings call opens with news that is somehow both not surprising and shocking all at the same time: the Model S and Model X are being discontinued. It's the end of an era for Tesla as the company looks to redefine itself in the coming decade. I've got so much to say about this, plus more highlights and analysis from the call. Join me! If you enjoy the podcast and would like to support my efforts, please check out my Patreon at https://www.patreon.com/teslapodcast and consider a monthly or (10% discounted!) annual pledge. Every little bit helps, and you can support for just $5 per month. And there are stacking bonuses in it for you at each pledge level, like early access to each episode at the $5 tier and the weekly Lightning Round bonus mini-episode (AND the early access!) at the $10 tier! And NO ADS at every Patreon tier! Also, don't forget to leave a message on the Ride the Lightning hotline anytime with a question, comment, or discussion topic for next week's show! The toll-free number to call is 1-888-989-8752. INTERESTED IN A FLEXIBLE EXTENDED WARRANTY FOR YOUR TESLA? Be a part of the future of transportation with XCare, the first extended warranty designed & built exclusively for EV owners, by EV owners. Use the code Lightning to get $100 off their "One-time Payment" option! Go to www.xcelerateauto.com/xcare to find the extended warranty policy that's right for you and your Tesla. P.S. Get 15% off your first order of awesome aftermarket Tesla accessories at AbstractOcean.com by using the code RTLpodcast at checkout. Grab the SnapPlate front license plate bracket for any Tesla at https://everyamp.com/RTL/ (don't forget the coupon code RTL too!). And enhance your car with cool carbon-fiber upgrades from RPMTesla.com and use the promo code RTL10+ for 10% off your next purchase.

    Grumpy Old Geeks
    731: I Want My 13 Trillion Dollars!

    Grumpy Old Geeks

    Play Episode Listen Later Jan 30, 2026 79:24


    We kick off FOLLOW UP by checking in on Elon Musk's personal dumpster fire, where the EU is investigating Grok for deepfake slop while Tesla's “unsupervised” robotaxis turned out to be supervised by literal chase cars — shocker. At least some of you are getting Siri settlement crumbs in your bank accounts, though you could probably double it betting against Musk's worthless promises on Polymarket. Transitioning to IN THE NEWS, Tesla is killing off the Model S and X to build robots while sales crater, proving that mixing hard-right politics with EV sales is a brilliant move for the balance sheet. Meanwhile, the corporate bloodbath continues with massive layoffs at Ubisoft, Vimeo (courtesy of the Bending Spoons buzzsaw), and Amazon, because “removing bureaucracy” is apparently HR-speak for 16,000 families losing their livelihoods. If that's not enough, Google is settling yet another privacy suit for $135 million, the EU is threatening to weaponize its tech sovereignty against the US, and the Trump administration wants Gemini to write federal regulations—because if there's one thing we want drafting airline safety rules, it's a hallucinating chatbot. Still IN THE NEWS, Waymo is under federal investigation for passing school buses and hitting children, while South Korea's new AI laws manage to please absolutely no one. Record labels are suing Anna's Archive for a cool $13 trillion—roughly three times the GDP of India—and the Winklevoss twins have finally admitted that NFTs are dead by shuttering Nifty Gateway. We pivot to MEDIA CANDY, where the Patriots and Seahawks are heading to Super Bowl 60, and the Winter Olympics are descending on Milan. We're doing the math on the Starfleet Academy timeline, celebrating the return of Ted Lasso and Shrinking, and trying to decide if Henry Cavill is the second coming of Timothy Dalton in the Highlander reboot. Plus, Jessica Jones is back in the Daredevil: Born Again trailer, and Colin Farrell's Sugar is returning to explain that wild noir twist we all totally saw coming. In APPS & DOODADS, the TikTok Armageddon is upon us as the new US owners break the app and drive everyone to UpScrolled, while Native Instruments enters insolvency, leaving our music-making dreams in restructuring limbo. Apple is dropping AirTag 2 with precision finding for your watch, which is great for finding the keys you lost while doom-scrolling. We wrap up with THE DARK SIDE WITH DAVE, featuring the new Muppets trailer and Steve Whitmire's deep thoughts on the state of the felt, plus a look at the artisans in Disneyland Handcrafted. Finally, Looney Tunes finds a new home on Turner Classic Movies, proving that the classics never die—they just move to a cable channel your parents actually watch. Dave finally learns about the Insta360 camera, a countertop dishwasher but no Animal Crackers, and a guide to gas masks and goggles... for no particular reason.
    Sponsors:
    DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
    SquareSpace - go to squarespace.com/GRUMPY for a free trial. And when you're ready to launch, use code GRUMPY to save 10% off your first purchase of a website or domain.
    Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
    SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
    1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password
    Show notes at https://gog.show/731
    Watch the episode at https://youtu.be/B54je_oJWjM
    FOLLOW UP
    The EU is investigating Grok and X over potentially illegal deepfakes
    People on Polymarket Are Making a Fortune by Betting Against Elon Musk's Famously Worthless Promises
    Elon Musk Made Tesla Fans Think Unsupervised Robotaxis Had Arrived. They Can't Find Them
    Tesla Quietly Pauses Its “Unsupervised” Robotaxi Rides as Reality Sets In
    Apple Siri settlement payments hitting bank accounts. What to know.
    IN THE NEWS
    Tesla bet big on Elon Musk. His politics continue to haunt it.
    With Tesla Revenue and Profits Down, Elon Musk Plays Up Safety
    Tesla Kills Models S and X
    Ubisoft proposes even more layoffs after last week's studio closures and game cancellations
    Vimeo lays off most of its staff just months after being bought by private equity firm
    Amazon Laying Off 16,000 as It Increases ‘Ownership' and Removes ‘Bureaucracy'
    Report Says the E.U. Is Gearing Up to Weaponize Europe's Tech Industry Against the U.S.
    Google will pay $135 million to settle illegal data collection lawsuit
    GDPR Enforcement Tracker
    NTSB will investigate why Waymo's robotaxis are illegally passing school buses
    Waymo robotaxi hits a child near an elementary school in Santa Monica
    Video shows Waymo vehicle slam into parked cars in Echo Park
    Trump admin reportedly plans to use AI to write federal regulations
    South Korea's ‘world-first' AI laws face pushback amid bid to become leading tech power
    Spotify and Big 3 Record Labels Sue Anna's Archive for $13 Trillion (!) Alleging Theft
    Amazon converting some Fresh supermarkets, Go stores to Whole Foods locations
    SEC agrees to dismiss case over crypto lending by Winklevoss' Gemini
    Winklevoss Twins Shut Down NFT Marketplace in Another Sign Crypto Art Is Dead
    MEDIA CANDY
    Plur1bus
    Shrinking
    A Knight of the Seven Kingdoms
    Steal
    How to watch the 2026 Super Bowl: Patriots vs. Seahawks channel, where to stream and more
    Winter Olympics: How to watch, schedule of events, and everything else you need to know about the 2026 Milano Cortina games
    Wait, So When Is 'Starfleet Academy' Set, Anyway?
    The First ‘Daredevil: Born Again' Season 2 Trailer Brings Back Jessica Jones
    Marvel Television's Daredevil: Born Again Season 2 | Teaser Trailer
    Ted Lasso Gets Kicked Back to Apple TV
    There Can Only Be One First Look at the ‘Highlander' Reboot
    Colin Farrell's Detective Show ‘Sugar' Will Finally Have to Address that Wild Twist This Summer
    APPS & DOODADS
    TikTok Is Now Collecting Even More Data About Its Users. Here Are the 3 Biggest Changes
    TikTok users freak out over app's 'immigration status' collection — here's what it means
    TikTok's New US Owners Are Off to a Very Rocky Start
    TikTok Data Center Outage Triggers Trust Crisis for New US Owners
    Yes, TikTok is still broken for many people
    Social network UpScrolled sees surge in downloads following TikTok's US takeover
    Native Instruments enters into insolvency proceedings, leaving its future uncertain
    Wispr Flow
    AirTag 2: Three tidbits you might have missed
    THE DARK SIDE WITH DAVE
    Dave Bittner
    The CyberWire
    Hacking Humans
    Caveat
    Control Loop
    Only Malware in the Building
    The Muppet Show | Official Trailer | Disney+
    Steve Whitmire, former Kermit the Frog performer, has written a long, thoughtful piece about the current state of the Muppets.
    Disneyland Handcrafted
    ‘Looney Tunes' Has Found a New Home: Turner Classic Movies
    The Dark Side of Scooby Doo
    A Disturbing (Yet Convincing) Theory Reveals There Were Never Any "Monsters" In Scooby Doo
    Cartoon Conspiracy Theory | Scooby Doo and The Gang Are Draft Dodgers?!
    Producing A Multi-Person Interview With An Insta360 Camera
    A listener on Mastodon pointed out that The Verge had a story on countertop dishwashers
    A Demonstrator's Guide to Gas Masks and Goggles
    Emma Repairs
    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    The Vergecast
    Tim Cook is destroying his own legacy

    The Vergecast

    Play Episode Listen Later Jan 30, 2026 95:11


    We've been covering what's happening in Minnesota, and the killing of Alex Pretti, all week on The Verge. To begin this episode, Nilay explains why — and why so many others seem to feel the same way right now. After that, the hosts talk about the CEO-studded screening of Melania Trump's documentary last weekend, the disastrous public appearance from Tim Cook, and whether Cook and other CEOs have any other option but to capitulate to the Trump administration. Then it's time for some gadgets: we talk about the super-foldy, super-expensive Samsung Galaxy Z Trifold, the Clawdbot / Moltbot phenomenon, and whether Google can finally put Chrome OS and Android together the right way. Finally, in the lightning round, it's time for Brendan Carr is a dummy, Tesla's anti-car pivot, Apple's design hires, and more.
    Further reading:
    On the ground in Minneapolis after the killing of Alex Pretti
    I grew up with Alex Pretti
    Creators and communities everywhere take a stand against ICE
    It doesn't matter if Alex Pretti had a gun
    Why won't anyone stop ICE from masking?
    Tim Cook, Andy Jassy, and AMD CEO Lisa Su are at the White House for a VIP screening of the Melania doc.
    Tim Cook had ‘a good conversation' with Trump about deescalation
    Cook in 2020: Speaking up on racism
    From The New York Times: Amazon's $35 Million ‘Melania' Promotion Has Critics Questioning Its Motives
    From The Hollywood Reporter: ‘Melania' Set for a $3 Million Opening Despite Amazon's $35 Million Marketing Push
    Here's Tim Cook hanging out with accused rapist Brett Ratner at the Melania screening
    What TikTok's new owners mean for your feed
    TikTok USA is broken
    TikTok is still down, here are all the latest updates
    TikTok is still struggling in the US due to a “cascading systems failure.”
    TikTok US is mostly back up and running
    TikTok blames its US problems on a power outage
    Oracle admits it broke TikTok.
    Congress doesn't seem to know if the TikTok deal complies with its law
    Is New TikTok banning the word “Epstein” in DMs? Not really.
    TikTokers are heading to UpScrolled following US takeover
    Mark Zuckerberg is all in on AI as the new social media
    Meta is stopping teens from chatting with its AI characters
    Bluesky is testing ‘live' features to take on X
    Best gas masks
    The Samsung Trifold will cost nearly three grand
    Google just leaked a first look at Android for PC in action
    Chromebooks train schoolkids to be loyal customers, internal Google document suggests
    Moltbot, the AI agent that ‘actually does things,' is tech's new obsession
    Clawdbot's bad day
    I used Claude to vibe-code my wildly overcomplicated smart home
    The FCC's Late Night Comedy Show
    Tesla discontinuing Model S and Model X to make room for robots
    Tesla says production-ready Optimus robot is coming soon
    Tesla hits a grim milestone: its second straight year of decline
    Elon Musk invests $2 billion in Elon Musk
    Hang on, there's a Trump Phone Ultra coming too?
    Halide co-founder Sebastiaan de With is joining Apple's design team
    The Stream Deck-packed gaming keyboard is a monster of good ideas
    Subscribe to The Verge for unlimited access to theverge.com, subscriber-exclusive newsletters, and our ad-free podcast feed. We love hearing from you! Email your questions and thoughts to vergecast@theverge.com or call us at 866-VERGE11. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The John Batchelor Show
    S8 Ep389: PREVIEW FOR LATER TODAY Guest: Bob Zimmerman. Zimmerman observes that while European nations like Germany are slowly adopting private space enterprise models, they remain years behind American commercial innovation.

    The John Batchelor Show

    Play Episode Listen Later Jan 30, 2026 1:22


    PREVIEW FOR LATER TODAY Guest: Bob Zimmerman. Zimmerman observes that while European nations like Germany are slowly adopting private space enterprise models, they remain years behind American commercial innovation.

    Daily Tech News Show
    Project Genie Makes 3D “World Models” On-Demand - DTNS 5196

    Daily Tech News Show

    Play Episode Listen Later Jan 30, 2026 27:24


    Apple announced big gains for the iPhone 17 last quarter while its services slowed, and Rabbit is back, teasing a specialized device designed for vibe coding. Starring Jason Howell and Jenn Cutter. Show notes can be found here. Hosted on Acast. See acast.com/privacy for more information.

    Motley Fool Money
    Tesla's Daring Move

    Motley Fool Money

    Play Episode Listen Later Jan 29, 2026 22:36


    For several years, Tesla has been straddling the fence between being an electric vehicle manufacturer and pursuing autonomous driving and humanoid robots. This most recent quarterly report looks like a sign that the company has picked a side. Plus, the ups and downs of Meta's and Microsoft's earnings. Tyler Crowe, Matt Frankel, and Jon Quast discuss:
    - Tesla's earnings
    - Elon Musk's announcement that Tesla will discontinue production of the Model S and X
    - Meta's massive capital spending plan
    - Microsoft's future getting closely tied to OpenAI
    - Stocks on our radar
    Companies discussed: TSLA, META, MSFT, GOOG, LUV, AAON, BMI. Host: Tyler Crowe. Guests: Matt Frankel, Jon Quast. Engineer: Dan Boyd. Disclosure: Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, “TMF”) do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability, or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. We're committed to transparency: All personal opinions in advertisements from Fools are their own. The product advertised in this episode was loaned to TMF and was returned after a test period, or the product advertised in this episode was purchased by TMF. Advertiser has paid for the sponsorship of this episode. Learn more about your ad choices. Visit megaphone.fm/adchoices

    Business Casual
    Tesla Scraps High-End Car Models & Starbucks is Heating Up

    Business Casual

    Play Episode Listen Later Jan 29, 2026 29:30


    Episode 768: Neal and Toby recap the Fed's interest rate meeting, where Chair Jerome Powell kept things steady, citing an active economy. Then, a rundown of Big Tech's earnings, starting with Tesla scrapping its car models, Meta's full speed ahead with AI, and Microsoft's mixed results. Plus, Amazon announces mass layoffs to focus on its AI investments. Meanwhile, Starbucks is picking up steam on its comeback tour. Also, Neal shares his favorite numbers on the Canadian ski boycott, London's startup scene, and WD-40's secret formula. Get your tickets for the Morning Brew Variety Show! https://tinyurl.com/MBvariety Learn more about Sandals at sandals.com Subscribe to Morning Brew Daily for more of the news you need to start your day. Share the show with a friend, and leave us a review on your favorite podcast app. Listen to Morning Brew Daily Here: https://www.swap.fm/l/mbd-note Watch Morning Brew Daily Here: https://www.youtube.com/@MorningBrewDailyShow Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Steve Harvey Morning Show
    Uplift: Discussing the career of Dr. Gladys West, whose mathematical models are the backbone of GPS and military systems.

    The Steve Harvey Morning Show

    Play Episode Listen Later Jan 29, 2026 27:06 Transcription Available


    Two-time Emmy and three-time NAACP Image Award-winning television executive producer Rushion McDonald interviewed Dr. Jacque Rushin and Robyn Donaldson. Below is a polished, thorough summary of the interview featuring Jacque Rushin and Robyn Donaldson discussing the career and legacy of Dr. Gladys West with Rushion McDonald, along with its purpose, key takeaways, and notable quotes, all drawn directly from the transcript.
    Summary of the Interview
    On Money Making Conversations Masterclass, Rushion McDonald welcomes Dr. Jacque Rushin (award-winning business executive, educator, mental health professional, humanitarian) and Robyn Donaldson (2025 Presidential Lifetime Achievement Award honoree for global STEM education) to discuss their celebration of Dr. Gladys B. West, a pioneering mathematician whose work laid the foundation for the GPS (Global Positioning System). The conversation explores the intersection of Juneteenth, Black excellence, STEM education, and Dr. West's life story, captured in her memoir It Began with a Dream. The guests highlight Dr. West as one of America's last living "hidden figures": a brilliant yet historically overlooked Black woman whose mathematical genius revolutionized everyday life. They detail how Dr. West rose from sharecropper roots, excelled academically at Virginia State University, earned her master's and PhD, spent 39 years contributing to government research, and ultimately developed the algorithms and modeling processes that power GPS. They also describe their collaborative effort to create the Westward Bound Program, a life-skills and STEM-focused curriculum inspired by Dr. West's principles of wisdom, endurance, strategy, and precision. Through humorous, emotional, and deeply insightful dialogue, the episode uplifts Dr. West's accomplishments while discussing mental health, technology dependence, the importance of exposure to STEM pathways for underserved youth, and how the legacy of Black innovators must remain central in cultural celebrations like Juneteenth.
    Purpose of the Interview
    1. To honor and amplify Dr. Gladys West's legacy: she is a living mathematical pioneer whose GPS contributions transformed global navigation and modern technology.
    2. To connect her story to Juneteenth's spirit of liberation and recognition: the guests highlight the "delayed recognition" of Black innovators and the importance of acknowledging hidden figures whose brilliance shaped society.
    3. To promote STEM exposure in underserved communities: Robyn Donaldson emphasizes equitable access to STEM opportunities so children can compete in a global, tech-driven world.
    4. To introduce and promote the Westward Bound Program: the curriculum teaches STEM principles, life skills, and personal development inspired by Dr. West's methodologies.
    5. To highlight themes of resilience, humility, and lifelong learning: Dr. West's quiet determination and academic persistence serve as a blueprint for young people and adults alike.
    Key Takeaways
    1. Dr. Gladys West is a "living hidden figure." Her research and mathematical modeling are the backbone of GPS, impacting navigation, transportation, military systems, and everyday digital tools.
    2. Her journey exemplifies brilliance shaped by humility and hard work. Born in 1930 to sharecropper parents, she excelled academically despite segregation, pursued multiple degrees, and overcame racial and gender barriers in government research settings.
    3. Juneteenth is the perfect backdrop for honoring Dr. West. Jacque stresses that Juneteenth represents "delayed freedom," paralleling the delayed recognition of Black inventors and innovators.
    4. STEM exposure is vital to equity. Robyn insists that Black children are fully capable of STEM success—they simply lack exposure, not aptitude. Without STEM skills, young people risk being left behind in a robotics-driven economy.
    5. Technology should complement—not replace—human thinking. Jacque cites Dr. West's personal preference for physical maps over GPS to maintain cognitive sharpness and critical thinking, a warning about over-dependence on AI and automation.
    6. The Westward Bound Program bridges STEM, life skills, and personal development. Built on the acronym "WEST" (Wisdom, Endurance, Strategy, Tracking), the program supports youth, adults, and entrepreneurs seeking direction and resilience.
    7. Mentorship, community, and relationships are central themes. Dr. West's success was nurtured by professors and role models at her HBCU, mirroring how Jacque and Robyn now uplift the next generation.
    8. Her story resonates globally and intergenerationally. From college students to young children to adults, the principles from her memoir and program promote self-belief, vision, discipline, and perseverance.
    Notable Quotes (all taken directly from the transcript)
    On Dr. West's impact: "She's a living hidden figure… her accomplishments have actually changed our way of living in every discipline of life." "Her technology… makes these things possible."
    On Juneteenth and recognition: "Juneteenth is about the delayed freedom of African Americans… and what Dr. West represents is the quiet, often overlooked brilliance that changes the world."
    On STEM access: "Our kids are not pursuing high-paying STEM careers, not because of their aptitude, but simply because they have not been exposed."
    On Dr. West's genius: "You don't have to be loud to be a legacy." "She is just so humble, but she's just brilliant. She's like a mathematical genius."
    On technology and mental health: "She didn't want to lose her critical thinking by depending on GPS… everything has a place, and it should complement you, not take over."
    On resilience and aspiration: "You have to believe there is something greater than what you're standing in." "From sharecropper to pioneer—you can be someone from humble beginnings and change the world."
    #SHMS #STRAW #BEST
    Support the show: https://www.steveharveyfm.com/
    See omnystudio.com/listener for privacy information.

    Strawberry Letter
    Uplift: Discussing the career of Dr. Gladys West, whose mathematical models are the backbone of GPS and military systems.

    Strawberry Letter

    Play Episode Listen Later Jan 29, 2026 27:06 Transcription Available


    Two-time Emmy and three-time NAACP Image Award-winning television executive producer Rushion McDonald interviewed Dr. Jacque Rushin and Robyn Donaldson. Below is a polished, thorough summary of the interview featuring Jacque Rushin and Robyn Donaldson discussing the career and legacy of Dr. Gladys West with Rushion McDonald, along with its purpose, key takeaways, and notable quotes, all drawn directly from the transcript.
    Summary of the Interview
    On Money Making Conversations Masterclass, Rushion McDonald welcomes Dr. Jacque Rushin (award-winning business executive, educator, mental health professional, humanitarian) and Robyn Donaldson (2025 Presidential Lifetime Achievement Award honoree for global STEM education) to discuss their celebration of Dr. Gladys B. West, a pioneering mathematician whose work laid the foundation for the GPS (Global Positioning System). The conversation explores the intersection of Juneteenth, Black excellence, STEM education, and Dr. West's life story, captured in her memoir It Began with a Dream. The guests highlight Dr. West as one of America's last living "hidden figures": a brilliant yet historically overlooked Black woman whose mathematical genius revolutionized everyday life. They detail how Dr. West rose from sharecropper roots, excelled academically at Virginia State University, earned her master's and PhD, spent 39 years contributing to government research, and ultimately developed the algorithms and modeling processes that power GPS. They also describe their collaborative effort to create the Westward Bound Program, a life-skills and STEM-focused curriculum inspired by Dr. West's principles of wisdom, endurance, strategy, and precision. Through humorous, emotional, and deeply insightful dialogue, the episode uplifts Dr. West's accomplishments while discussing mental health, technology dependence, the importance of exposure to STEM pathways for underserved youth, and how the legacy of Black innovators must remain central in cultural celebrations like Juneteenth.
    Purpose of the Interview
    1. To honor and amplify Dr. Gladys West's legacy: she is a living mathematical pioneer whose GPS contributions transformed global navigation and modern technology.
    2. To connect her story to Juneteenth's spirit of liberation and recognition: the guests highlight the "delayed recognition" of Black innovators and the importance of acknowledging hidden figures whose brilliance shaped society.
    3. To promote STEM exposure in underserved communities: Robyn Donaldson emphasizes equitable access to STEM opportunities so children can compete in a global, tech-driven world.
    4. To introduce and promote the Westward Bound Program: the curriculum teaches STEM principles, life skills, and personal development inspired by Dr. West's methodologies.
    5. To highlight themes of resilience, humility, and lifelong learning: Dr. West's quiet determination and academic persistence serve as a blueprint for young people and adults alike.
    Key Takeaways
    1. Dr. Gladys West is a "living hidden figure." Her research and mathematical modeling are the backbone of GPS, impacting navigation, transportation, military systems, and everyday digital tools.
    2. Her journey exemplifies brilliance shaped by humility and hard work. Born in 1930 to sharecropper parents, she excelled academically despite segregation, pursued multiple degrees, and overcame racial and gender barriers in government research settings.
    3. Juneteenth is the perfect backdrop for honoring Dr. West. Jacque stresses that Juneteenth represents "delayed freedom," paralleling the delayed recognition of Black inventors and innovators.
    4. STEM exposure is vital to equity. Robyn insists that Black children are fully capable of STEM success—they simply lack exposure, not aptitude. Without STEM skills, young people risk being left behind in a robotics-driven economy.
    5. Technology should complement—not replace—human thinking. Jacque cites Dr. West's personal preference for physical maps over GPS to maintain cognitive sharpness and critical thinking, a warning about over-dependence on AI and automation.
    6. The Westward Bound Program bridges STEM, life skills, and personal development. Built on the acronym "WEST" (Wisdom, Endurance, Strategy, Tracking), the program supports youth, adults, and entrepreneurs seeking direction and resilience.
    7. Mentorship, community, and relationships are central themes. Dr. West's success was nurtured by professors and role models at her HBCU, mirroring how Jacque and Robyn now uplift the next generation.
    8. Her story resonates globally and intergenerationally. From college students to young children to adults, the principles from her memoir and program promote self-belief, vision, discipline, and perseverance.
    Notable Quotes (all taken directly from the transcript)
    On Dr. West's impact: "She's a living hidden figure… her accomplishments have actually changed our way of living in every discipline of life." "Her technology… makes these things possible."
    On Juneteenth and recognition: "Juneteenth is about the delayed freedom of African Americans… and what Dr. West represents is the quiet, often overlooked brilliance that changes the world."
    On STEM access: "Our kids are not pursuing high-paying STEM careers, not because of their aptitude, but simply because they have not been exposed."
    On Dr. West's genius: "You don't have to be loud to be a legacy." "She is just so humble, but she's just brilliant. She's like a mathematical genius."
    On technology and mental health: "She didn't want to lose her critical thinking by depending on GPS… everything has a place, and it should complement you, not take over."
    On resilience and aspiration: "You have to believe there is something greater than what you're standing in." "From sharecropper to pioneer—you can be someone from humble beginnings and change the world."
    #SHMS #STRAW #BEST
    See omnystudio.com/listener for privacy information.