Podcasts about 3D

  • Podcasts: 25,726
  • Episodes: 67,935
  • Average duration: 45m
  • Daily new episodes: 10+
  • Latest episode: Feb 13, 2026

POPULARITY (chart, 2019-2026)


    Latest podcast episodes about 3D

    ASMR by GentleWhispering
    ~♥Valentine's Day Breakfast♥~

    Feb 13, 2026 · 13:10


    I apologize to any vegan viewers of mine, I didn't mean to offend anyone. :)) I am sorry this is not a relaxation video, but I was practicing this recipe before Valentine's Day and decided to record it at the same time. :) I did this all without any preparation so it might not have come out perfect or relaxing.. but I still hope you will enjoy it and maybe this could be useful for some :) .. I am working on some 3D sound videos :))) that I hope you will enjoy soon :) Thank you and Happy Valentine's Day! ♥

    Amazon MP3: https://www.amazon.com/s/ref=ntt_srch_drd_B01BAXDICM?ie=UTF8&field-keywords=GentleWhispering&index=digital-music&search-type=ss
    Google Play MP3: https://play.google.com/store/music/artist/Gentlewhispering?id=Apc4txglf3f2siowzgqccttky5i&hl=en
    Spotify MP3: https://play.spotify.com/artist/3gkB9Cdx4UuWQxjhelyd87?play=true&utm_source=open.spotify.com&utm_medium=open
    iTunes MP3: https://itunes.apple.com/us/artist/gentlewhispering/id1077570705#see-all/top-songs
    iTunes MP3: https://itunes.apple.com/us/artist/maria-gentlewhispering/id1048320316
    Website: http://www.gentlewhispering.com
    PayPal and email: maria@gentlewhispering.com
    #asmr #gentlewhispering

    Beyond the Kill
    EP 603: Capital "H" Hunting with Craig Francis of Ultraview Archery

    Feb 13, 2026 · 110:47


    This episode features Craig Francis, the VP of Brand at Ultraview Archery, and a guy whose content, perspectives, and opinions are the furthest thing from mundane. Craig's been in the hunting creative space for over 10 years and has worked with numerous brands over that timeframe, many of which are household names in our community. He's a writer, photographer, and one of the more capable storytellers in the game.

    Topics covered include the tension between building a brand that stands for something while also keeping an eye on the bottom line, what capital "H" hunting means, and of course, an inside look at the story behind Ultraview Archery and what's on the horizon for this upstart company that's trying to build a business focused very simply on helping you write and tell better stories.

    NOTABLE QUOTES: "That's literally the whole game plan. We got nothing else going on here. Great stuff. Take it to the field. Write your own story. Share it with your friends. You'll probably never talk about us, but we'll have been a small part. That's literally all it is."

    @craigfranciscreative
    @ultraviewarchery

    DEALS & PARTNERS:

    For over 100 years Leica has set the standard for premium optics. From spotting scopes to binoculars, rifle scopes and the new CRF MAX rangefinders, Leica is the choice for those who accept no compromises.

    Don't miss out on Canada's best mountain hunting and conservation expo! The 2026 Wild Sheep Society of BC's Salute to Conservation Mountain Hunting Expo will sell out fast. Get your tickets now!

    onX Hunt is the most powerful 3D mapping solution for hunters. Get your FREE trial today. If you're already a member, check out the exclusive offers and perks available when you upgrade to an Elite Member.

    Tired of gut-rotting instant coffee? Check out This Is Coffee and get yourself some great instant coffee for when you're in the backcountry or on the road.

    SUPPORT WILD SHEEP:

    Go to Wild Sheep Foundation to find a membership option that suits your budget and commitment to wild sheep.

    Go to Wild Sheep Society of BC to become a member, enter raffles, buy merch and support BC's wild sheep populations.

    SUPPORT MOUNTAIN GOATS:

    Go to Rocky Mountain Goat Alliance to find a membership option that suits your budget and commitment to conserving mountain goats and their habitat.

    Hackaday Podcast
    Ep 357: BreezyBox, Antique Tech, and Defusing Killer Robots

    Feb 13, 2026 · 66:51


    In the latest episode of the Hackaday Podcast, editors Elliot Williams and Tom Nardi start things off by discussing the game of lunar hide-and-seek that has researchers searching for the lost Luna 9 probe, and drop a few hints about the upcoming Hackaday Europe conference. From there they'll marvel over a miniature operating system for the ESP32, examine the re-use of iPad displays, and find out about homebrew software development for an obscure Nintendo handheld. You'll also hear about a gorgeous RGB 14-segment display, a robot that plays chess, and a custom 3D printed turntable for all your rotational needs. The episode wraps up with a sobering look at the dangers of industrial robotics, and some fascinating experiments to determine if a decade-old roll of PLA filament is worth keeping or not. Check out the links over on Hackaday if you want to follow along, and as always, tell us what you think about this episode in the comments!

    "Fun" and Games Podcast
    Episode 206: Big Hops

    Feb 13, 2026 · 57:50


    When traversing a 3D world in games, it's as important to enjoy your character's movement as it is to just enjoy your character. Matt & Geoff are joined by Chris Wade, director of the recently released froggy platformer, Big Hops, to discuss when to heed or ignore your influences, creating meaningful interactions, and finding the joy in going on a journey. You can find Chris on Bluesky and YouTube. You can check out Big Hops on Steam, Switch, and PS5. We have a Patreon! Gain access to episode shout-outs, bonus content, early downloads of regular episodes, an exclusive RSS feed and more! Click here! You can find the show on Bluesky, Instagram and YouTube! Please rate and review us on Apple Podcasts! Rate us on Pocket Casts! Wanna join the Certain POV Discord? Click here! Episode Art by Case Aiken. Episode Music by Geoff Moonen.

    Ready 4 Pushback
    Ep. 322 No Shoes, No Shirt, No Job Offer

    Feb 12, 2026 · 16:32


    In this solo episode, Nik goes head to toe discussing interview attire that gets you hired, and getups that get you sent home before you even enter the interview room. Drawing from real-world hiring insights, he explains exactly what pilots should (and absolutely should not) wear, how to prepare for both in-person and virtual interview setups, and why true professionalism starts long before you answer your first interview question. From regionals to majors and everything in between, if you want to show up as a confident, detail-oriented professional and not sabotage your big shot, this episode is required listening.  CONNECT WITH US Are you ready to take your preparation to the next level? Don't wait until it's too late. Use the promo code "R4P2026" and save 10% on all our services. Check us out at www.spitfireelite.com! If you want to recommend someone to guest on the show, email Nik at podcast@spitfireelite.com, and if you need a professional pilot resume, go to www.spitfireelite.com/podcast/ for FREE templates! SPONSOR Are you a pilot just coming out of the military and looking for the perfect second home for your family? Look no further! Reach out to Marty and his team by visiting www.tridenthomeloans.com to get the best VA loans available anywhere in the US. Be ready for takeoff anytime with 3D-stretch, stain-repellent, and wrinkle-free aviation uniforms by Flight Uniforms. Just go to www.flightuniform.com and type the code SPITFIREPOD20 to get a special 20% discount on your first order. #Aviation #AviationCareers #aviationcrew #AviationJobs #AviationLeadership #AviationEducation #AviationOpportunities #AviationPodcast #AirlinePilot #AirlineJobs #AirlineInterviewPrep #flying #flyingtips #PilotDevelopment #PilotFinance #pilotcareer #pilottips #pilotcareertips #PilotExperience #pilotcaptain #PilotTraining #PilotSuccess #pilotpodcast #PilotPreparation #Pilotrecruitment #flightschool #aviationschool #pilotcareer #pilotlife #pilot

    Design Better Podcast
    Nate Koechly and Matthew Darby: YouTube's UX Director and Director of PM on redesigning one of the world's most-used apps

    Feb 12, 2026 · 43:22


    Redesigning one of the world's most-used apps is no small feat, especially when that app is also the second largest search engine in the world: YouTube. Over the last four years, Nate Koechly, UX Director at YouTube, and Matthew Darby, Director of Product Management, have been leading an ambitious effort to balance Google's metrics-driven culture with the subjective challenge of making an app feel “modern.” Visit our Substack for bonus content and more: https://designbetterpodcast.com/p/nate-koechly-and-matthew-darby In our conversation, Nate and Matt share how they developed predictive measurement tools to gauge user perception, why they pair visual updates with quality-of-life features like comment threading and improved video controls, and how their research process has evolved from measuring clicks to understanding satisfied watch time. We also dig into one of YouTube's most complex challenges: the algorithm. As Nate and Matt explain, what users say they want doesn't always match what actually makes them happy on the platform. They also discuss their work exploring ways to give viewers more agency and control, including the possibility of using natural language to tune your feed. Both guests have a genuine passion for how YouTube enables deep expertise and niche interests to find their audiences—from 3D models of the Golden Gate Bridge to forest fire education from Northern California lookouts. Behind the algorithms and design updates is a platform where, as Nate puts it, “when you give people a voice, the things they say are just inspiring.” *** Premium Episodes on Design Better This ad-supported episode is available to everyone. If you'd like to hear it ad-free, upgrade to our premium subscription, where you'll get an additional 2 ad-free episodes per month (4 total). Premium subscribers also get access to the documentary Design Disruptors and our growing library of books: You'll also get access to our monthly AMAs with former guests, ad-free episodes, discounts and early access to workshops, and our monthly newsletter The Brief that compiles salient insights, quotes, readings, and creative processes uncovered in the show. And subscribers at the annual level now get access to the Design Better Toolkit, which gets you major discounts and free access to tools and courses that will help you unlock new skills, make your workflow more efficient, and take your creativity further. Upgrade to paid *** If you're interested in sponsoring the show, please contact us at: sponsors@thecuriositydepartment.com If you'd like to submit a guest idea, please contact us at: contact@thecuriositydepartment.com

    The Spiritual Investor
    The Real Reason Your Money Feels Inconsistent

    Feb 12, 2026 · 28:20


    Welcome back to The Spiritual Investor Podcast. In this episode, I'm talking about something that came up in one of our mastermind applications: the feeling that money comes and goes and doesn't feel steady. When money feels unstable, it can seem like something outside of you is controlling your access to abundance. Your company. The market. Your audience. A relationship. But nothing in the 3D world can actually block creation. Creation is the most powerful energy there is. And when you operate from certainty, you become a match to what you desire. This conversation is about certainty. About self-expression. About stabilizing a new frequency. And about becoming the version of you who no longer waits for permission from the external world.

    In this episode, I explore:
    • Why money feels like it comes and goes
    • The illusion of external blocks to abundance
    • How control in relationships mirrors control with money
    • Operating from certainty without ego

    If you are ready to go deeper into this work in person, I am hosting a private event in San Luis Obispo April 15 through 17. Visit thespiritualinvestor.com/live2026 to learn more.

    Epigenetics Podcast
    Decoding Cell Fate Through 3D Genome Organization and Chromatin Dynamics (Srinjan Basu)

    Feb 12, 2026 · 41:20


    In this episode of the Epigenetics Podcast, we talked with Srinjan Basu from Imperial College London about his work on how chromatin architecture and epigenetic mechanisms orchestrate developmental gene expression programs.

    We begin by exploring Dr. Basu's early work at Harvard, which involved pioneering Raman-based label-free imaging, allowing the study of chromatin dynamics in live tissue. Here, he tackles technical challenges faced in visualizing DNA interactions, emphasizing the shift from 2D to 3D analysis and the importance of real-time observation of chromatin behavior under various conditions. This segues into his groundbreaking research on single transcription factors interacting with chromatin, revealing subtle but significant changes in the dynamics of gene regulation.

    We transition into the complexities of chromatin architecture as Dr. Basu recounts his efforts in mapping the entire mouse genome in single pluripotent cells, unearthing unexpected heterogeneity among cells. This heterogeneity raises intriguing questions about its impact on cellular function, prompting ongoing investigations into chromatin dynamics and the role of remodeling complexes like NuRD in cell fate transitions. Dr. Basu elucidates how recent studies have begun to bridge the gaps in understanding how transcription factors and chromatin dynamics interact during cellular decisions, particularly emphasizing the influence of mechanical signals and the intrinsic properties of cells. His research underscores the idea that stem cells undergo a preparatory phase for differentiation, highlighting the critical balance of intrinsic and extrinsic factors that govern genetic expression and cellular outcomes.

    We also talk about Dr. Basu's current research trajectory, focusing on enhancing imaging techniques to study gene dynamics in tissue contexts relevant to developmental biology and disease states. He illustrates a vision for future projects that integrate advanced imaging tools to investigate transcription factor dynamics and chromatin interactions in live cells and embryos, furthering the understanding of decision-making processes in cellular contexts.

    References
    Stevens TJ, Lando D, Basu S, et al. 3D structures of individual mammalian genomes studied by single-cell Hi-C. Nature. 2017 Apr;544(7648):59-64. DOI: 10.1038/nature21429. PMID: 28289288; PMCID: PMC5385134.
    Basu S, Needham LM, Lando D, et al. FRET-enhanced photostability allows improved single-molecule tracking of proteins and protein complexes in live mammalian cells. Nature Communications. 2018 Jun;9(1):2520. DOI: 10.1038/s41467-018-04486-0. PMID: 29955052; PMCID: PMC6023872.

    Related Episodes
    Advanced Optical Imaging in 3D Nuclear Organisation (Lothar Schermelleh)
    Analysis of 3D Chromatin Structure Using Super-Resolution Imaging (Alistair Boettiger)
    Single-Molecule Imaging of the Epigenome (Efrat Shema)

    Contact
    Epigenetics Podcast on Mastodon
    Epigenetics Podcast on Bluesky
    Dr. Stefan Dillinger on LinkedIn
    Active Motif on LinkedIn
    Active Motif on Bluesky
    Email: podcast@activemotif.com

    Bourbon Showdown Podcast
    Whiskey JYPSI: Ari Sussman

    Feb 12, 2026 · 70:22


    This week on The Bourbon Showdown Podcast, Jesse sits down with whiskey maker Ari Sussman of Whiskey JYPSI to pop the top and pour through the new LEGACY Batch 003: The Declaration. Jesse and Ari dive into the brand's history and explore how Ari and Eric Church have worked tirelessly since day one to craft a true bourbon symphony, creating a volumetric, 3D whiskey experience with every release. Legacy Batch 3 is a perfect showcase of that passion and precision. Along the way, Ari drops a few Easter eggs hidden within this release, some dating all the way back to Revolutionary times, as the two break down the mash bill, grains, barrels, and the meticulous blending that brings this whiskey to life. It's a flavor-packed, behind-the-scenes conversation filled with great stories, Eric Church moments, and some seriously memorable pours. So pour yourself a big pour of Whiskey JYPSI and get ready for this week's episode of The Bourbon Showdown Podcast.

    GovCast
    New NPS Challenge Aims to Rewrite How the Military Builds Missiles

    Feb 12, 2026 · 10:57


    War Department components are pivoting away from "exquisite" hardware in favor of agile, low-cost manufacturing methodologies. Part of this shift includes prize challenges like the Naval Postgraduate School's Tactical Missile Innovation Challenge, a non-traditional competition that emphasizes methodology. Speaking at AFCEA/USNI WEST in San Diego, California, Kaitie Penry, Director of Research Innovation at the Naval Postgraduate School (NPS), said that the Tactical Missile Innovation Challenge, unlike traditional military contracts that demand a hardware prototype, asks participants for a design methodology. The goal, she said, is to produce missiles at scale using 3D printing and commercial off-the-shelf parts for cost efficiency and scalability. Penry also said that the challenge aims to replace million-dollar assets with $5,000 alternatives that can be mass-produced, allowing the fleet to maintain a "quantity has a quality of its own" advantage for munitions.

    Comfort Zone
    I'm Holding Up Poop

    Feb 12, 2026 · 67:37


    Matt wants to do an AI check-in, Chris serves a master class on getting started with 3D printing, and Niléane is away, so the dads talk a little Formula 1…just a little. In this week's Cozy Zone, Chris gets a brand glow up!

    Want more from the gang? Cozy Zone is a bonus podcast every Monday where we let loose on all sorts of fun topics. You can get cozy with the Comfort Zone crew for just $5/month or $50/year, which not only makes the bonus episodes possible, but supports Comfort Zone, too. How would you have done our challenges? How would you answer the question at the end of the show? Let us know!

    Things discussed: Two wolves, Claude Code, OpenClaw, Bambu Lab A1 (and A1 mini), Gridfinity, MakerWorld, Printables, Thangs

    Follow the hosts: Chris on YouTube, Matt on Birchtree, Niléane on Mastodon, Comfort Zone on Mastodon, Comfort Zone on Bluesky

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod: on YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction, quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids.
And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes. So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.Brandon [00:09:45]: So I want to ask, why does the intermediate states matter? But first, I kind of want to understand, why do we care? What proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through that interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how disease work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction. 
And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like seeing the difference between... Having kind of a list of parts that you would put it in a car and seeing kind of the car in its final form, you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein falls or, you know, how the car is made to some extent is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding. In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line in the, I think it's in the Alpha Fold 2 manuscript, where they sort of discuss also like why we even hopeful that we can target the problem in the first place. And then there's this notion that like, well, four proteins that fold. The folding process is almost instantaneous, which is a strong, like, you know, signal that like, yeah, like we should, we might be... able to predict that this very like constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be actually studied. And part of the reason why people thought it was impossible, it used to be studied as kind of like a classical example. Of like an MP problem. Uh, like there are so many different, you know, type of, you know, shapes that, you know, this amino acid could take. And so, this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an MP problem. And so it was very surprising also from that perspective, kind of seeing. Machine learning so clear, there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, to understand, but that is, models I've, I've learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, that there were. There were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, like think of it this way that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. 
Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other. I see.Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right. It's almost like, you know, you have this big, like three dimensional Valley, you know, where you're sort of trying to find like these like low energy states and there's so much to search through. That's almost overwhelming. But these hints, they sort of maybe put you in. An area of the space that's already like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of like, how much physics are these models learning, you know, versus like, just pure like statistics. And like, I think one of the thing, at least I believe is that once you're in that sort of approximate area of the solution space, then the models have like some understanding, you know, of how to get you to like, you know, the lower energy, uh, low energy state. And so maybe you have some, some light understanding. Of physics, but maybe not quite enough, you know, to know how to like navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right Valley and then it finds the, the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation about our awful free works that I think it's quite insightful, of course, doesn't cover kind of the entirety of, of what awful does that is, um, they're going to borrow from, uh, Sergio Chinico for MIT. So he sees kind of awful. Then the interesting thing about awful is God. This very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is most multiple sequence alignment. Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then is almost as if the model is. of running some kind of, you know, diastro algorithm where it's sort of decoding, okay, these have to be closed. Okay. Then if these are closed and this is connected to this, then this has to be somewhat closed. And so you decode this, that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of theBrandon [00:16:42]: actual potential structure. Interesting. So there's kind of two different things going on in the kind of coarse grain and then the fine grain optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe we have a good time to move on to that. So yeah, AlphaFold2 came out and it was like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out and maybe for some more history, like what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Bolt. But anyway. 
Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field and with many others, you know, the clear problem that, you know, was, you know, obvious after that was, okay, now we can do individual chains. Can we do interactions, interaction, different proteins, proteins with small molecules, proteins with other molecules. And so. So why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of the multiple chains. And then these multiple chains interact with other molecules to give the function to those. And on the other hand, you know, when we try to intervene of these interactions, think about like a disease, think about like a, a biosensor or many other ways we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem after AlphaVol2, you know, became clear, kind of one of the biggest problems in the field to, to solve many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaVol3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting thing that they were able to do while, you know, some of the rest of the field that really tried to try to model different interactions separately, you know, how protein interacts with small molecules, how protein interacts with other proteins, how RNA or DNA have their structure, they put everything together and, you know, train very large models with a lot of advances, including kind of changing kind of systems. Some of the key architectural choices and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein, small molecules is critical to developing kind of new drugs, protein, protein, understanding, you know, interactions of, you know, proteins with RNA and DNAs and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem. So where there is a single answer and you're trying to shoot for that answer to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these structures can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution. 
But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about, you know, I'm undecided between different answers, what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvement. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, like a very specialized equivariant architecture that it was in AlphaFold3.Brandon [00:21:41]: So this is a bitter lesson, a little bit.Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from, you know, being like a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have kind of architecture that are very specialized. And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of that most of the consensus is that, you know, the performance... that we get from the specialized architecture is vastly superior than what we get through a single transformer. Another interesting thing that I think on the staying on the modeling machine learning side, which I think it's somewhat counterintuitive seeing some of the other kind of fields and applications is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.RJ [00:29:14]: in a place, I think, where we had, you know, some experience working in, you know, with the data and working with this type of models. And I think that put us already in like a good place to, you know, to produce it quickly. And, you know, and I would even say, like, I think we could have done it quicker. The problem was like, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so like, while the model was training, we were like, finding bugs left and right. A lot of them that I wrote. And like, I remember like, I was like, sort of like, you know, doing like, surgery in the middle, like stopping the run, making the fix, like relaunching. And yeah, we never actually went back to the start. We just like kept training it with like the bug fixes along the way, which was impossible to reproduce now. Yeah, yeah, no, that model is like, has gone through such a curriculum that, you know, learned some weird stuff. But yeah, somehow by miracle, it worked out.Gabriel [00:30:13]: The other funny thing is that the way that we were training, most of that model was through a cluster from the Department of Energy. But that's sort of like a shared cluster that many groups use. 
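A few turns earlier, the speakers contrast regression (which averages over plausible answers) with generative sampling (which returns one valid answer at a time). A minimal toy sketch of that distinction, unrelated to Boltz's actual diffusion model: here the "structure" is just a single number that is legitimately either -1 or +1.

```python
# Toy illustration of the regression vs. generative-sampling point from the
# conversation. If the true answer is equally often -1 or +1 (two valid
# conformations), a mean-squared-error regressor converges to the average (~0),
# which is neither valid answer; a sampler returns one of the real modes each
# time and can be re-ranked afterwards.
import random

ground_truth = [random.choice([-1.0, 1.0]) for _ in range(10_000)]  # ambiguous data

regression_answer = sum(ground_truth) / len(ground_truth)           # ~0.0, an invalid "average"
generative_answers = [random.choice([-1.0, 1.0]) for _ in range(5)]  # each one is a valid mode

print(f"regression prediction: {regression_answer:+.3f}")
print(f"generative samples:    {generative_answers}")
```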
And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so we actually kind of towards the end with Evan, the CEO of Genesis, and basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we, we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.Brandon [00:30:57]: Yeah, yeah.Brandon [00:31:02]: And then, and then there's some progression from there.Gabriel [00:31:06]: Yeah, so I would say kind of that, both one, but also kind of these other kind of set of models that came around the same time, were kind of approaching were a big leap from, you know, kind of the previous kind of open source models, and, you know, kind of really kind of approaching the level of AlphaVault 3. But I would still say that, you know, even to this day, there are, you know, some... specific instances where AlphaVault 3 works better. I think one common example is antibody antigen prediction, where, you know, AlphaVault 3 still seems to have an edge in many situations. Obviously, these are somewhat different models. They are, you know, you run them, you obtain different results. So it's, it's not always the case that one model is better than the other, but kind of in aggregate, we still, especially at the time.Brandon [00:32:00]: So AlphaVault 3 is, you know, still having a bit of an edge. We should talk about this more when we talk about Boltzgen, but like, how do you know one is, one model is better than the other? Like you, so you, I make a prediction, you make a prediction, like, how do you know?Gabriel [00:32:11]: Yeah, so easily, you know, the, the great thing about kind of structural prediction and, you know, once we're going to go into the design space of designing new small molecule, new proteins, this becomes a lot more complex. But a great thing about structural prediction is that a bit like, you know, CASP was doing, basically the way that you can evaluate them is that, you know, you train... You know, you train a model on a structure that was, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resources, basically common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then... And then we basically look for recent structures, okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.Brandon [00:33:13]: And then on this new structure, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, when you're, you intentionally trained to the same date or something like that. Exactly. Right. Yeah.Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily kind of compare these models, obviously, that assumes that, you know, the training. You've always been very passionate about validation. I remember like DiffDoc, and then there was like DiffDocL and DocGen. You've thought very carefully about this in the past. 
Like, actually, I think DocGen is like a really funny story that I think, I don't know if you want to talk about that. It's an interesting like... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people. Really like... But honestly, most of the times, you know, to be honest, that's also maybe the most useful feedback is, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical. And this is also something, you know, across other fields of machine learning. It's always critical to set, to do progress in machine learning, set clear benchmarks. And as, you know, you start doing progress of certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates. And so, you know, the example of DocGen was, you know, we published this initial model called DiffDoc in my first year of PhD, which was sort of like, you know, one of the early models to try to predict kind of interactions between proteins, small molecules, that we bought a year after AlphaFold2 was published. And now, on the one hand, you know, on these benchmarks that we were using at the time, DiffDoc was doing really well, kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to kind of many biologists, and one example was that we collaborated with was the group of Nick Polizzi at Harvard. We noticed, started noticing that there was this clear, pattern where four proteins that were very different from the ones that we're trained on, the models was, was struggling. And so, you know, that seemed clear that, you know, this is probably kind of where we should, you know, put our focus on. And so we first developed, you know, with Nick and his group, a new benchmark, and then, you know, went after and said, okay, what can we change? And kind of about the current architecture to improve this pattern and generalization. And this is the same that, you know, we're still doing today, you know, kind of, where does the model not work, you know, and then, you know, once we have that benchmark, you know, let's try to, through everything we, any ideas that we have of the problem.RJ [00:36:15]: And there's a lot of like healthy skepticism in the field, which I think, you know, is, is, is great. And I think, you know, it's very clear that there's a ton of things, the models don't really work well on, but I think one thing that's probably, you know, undeniable is just like the pace of, pace of progress, you know, and how, how much better we're getting, you know, every year. And so I think if you, you know, if you assume, you know, any constant, you know, rate of progress moving forward, I think things are going to look pretty cool at some point in the future.Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?RJ [00:36:45]: Like, yeah, yeah, yeah, it's one of those things. Like, you've been doing this. Being in the field, you don't see it coming, you know? And like, I think, yeah, hopefully we'll, you know, we'll, we'll continue to have as much progress we've had the past few years.Brandon [00:36:55]: So this is maybe an aside, but I'm really curious, you get this great feedback from the, from the community, right? 
By being open source. My question is partly like, okay, yeah, if you open source and everyone can copy what you did, but it's also maybe balancing priorities, right? Where you, like all my customers are saying. I want this, there's all these problems with the model. Yeah, yeah. But my customers don't care, right? So like, how do you, how do you think about that? Yeah.Gabriel [00:37:26]: So I would say a couple of things. One is, you know, part of our goal with Bolts and, you know, this is also kind of established as kind of the mission of the public benefit company that we started is to democratize the access to these tools. But one of the reasons why we realized that Bolts needed to be a company, it couldn't just be an academic project is that putting a model on GitHub is definitely not enough to get, you know, chemists and biologists, you know, across, you know, both academia, biotech and pharma to use your model to, in their therapeutic programs. And so a lot of what we think about, you know, at Bolts beyond kind of the, just the models is thinking about all the layers. The layers that come on top of the models to get, you know, from, you know, those models to something that can really enable scientists in the industry. And so that goes, you know, into building kind of the right kind of workflows that take in kind of, for example, the data and try to answer kind of directly that those problems that, you know, the chemists and the biologists are asking, and then also kind of building the infrastructure. And so this to say that, you know, even with models fully open. You know, we see a ton of potential for, you know, products in the space and the critical part about a product is that even, you know, for example, with an open source model, you know, running the model is not free, you know, as we were saying, these are pretty expensive model and especially, and maybe we'll get into this, you know, these days we're seeing kind of pretty dramatic inference time scaling of these models where, you know, the more you run them, the better the results are. But there, you know, you see. You start getting into a point that compute and compute costs becomes a critical factor. And so putting a lot of work into building the right kind of infrastructure, building the optimizations and so on really allows us to provide, you know, a much better service potentially to the open source models. That to say, you know, even though, you know, with a product, we can provide a much better service. I do still think, and we will continue to put a lot of our models open source because the critical kind of role. I think of open source. Models is, you know, helping kind of the community progress on the research and, you know, from which we, we all benefit. And so, you know, we'll continue to on the one hand, you know, put some of our kind of base models open source so that the field can, can be on top of it. And, you know, as we discussed earlier, we learn a ton from, you know, the way that the field uses and builds on top of our models, but then, you know, try to build a product that gives the best experience possible to scientists. So that, you know, like a chemist or a biologist doesn't need to, you know, spin off a GPU and, you know, set up, you know, our open source model in a particular way, but can just, you know, a bit like, you know, I, even though I am a computer scientist, machine learning scientist, I don't necessarily, you know, take a open source LLM and try to kind of spin it off. 
But, you know, I just maybe open a GPT app or a cloud code and just use it as an amazing product. We kind of want to give the same experience. So this front world.Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right?Brandon [00:40:48]: So just buy the scalpel.RJ [00:40:50]: You wouldn't believe like the number of people, even like in my short time, you know, between AlphaFold3 coming out and the end of the PhD, like the number of people that would like reach out just for like us to like run AlphaFold3 for them, you know, or things like that. Just because like, you know, bolts in our case, you know, just because it's like. It's like not that easy, you know, to do that, you know, if you're not a computational person. And I think like part of the goal here is also that, you know, we continue to obviously build the interface with computational folks, but that, you know, the models are also accessible to like a larger, broader audience. And then that comes from like, you know, good interfaces and stuff like that.Gabriel [00:41:27]: I think one like really interesting thing about bolts is that with the release of it, you didn't just release a model, but you created a community. Yeah. Did that community, it grew very quickly. Did that surprise you? And like, what is the evolution of that community and how is that fed into bolts?RJ [00:41:43]: If you look at its growth, it's like very much like when we release a new model, it's like, there's a big, big jump, but yeah, it's, I mean, it's been great. You know, we have a Slack community that has like thousands of people on it. And it's actually like self-sustaining now, which is like the really nice part because, you know, it's, it's almost overwhelming, I think, you know, to be able to like answer everyone's questions and help. It's really difficult, you know. The, the few people that we were, but it ended up that like, you know, people would answer each other's questions and like, sort of like, you know, help one another. And so the Slack, you know, has been like kind of, yeah, self, self-sustaining and that's been, it's been really cool to see.RJ [00:42:21]: And, you know, that's, that's for like the Slack part, but then also obviously on GitHub as well. We've had like a nice, nice community. You know, I think we also aspire to be even more active on it, you know, than we've been in the past six months, which has been like a bit challenging, you know, for us. But. Yeah, the community has been, has been really great and, you know, there's a lot of papers also that have come out with like new evolutions on top of bolts and it's surprised us to some degree because like there's a lot of models out there. And I think like, you know, sort of people converging on that was, was really cool. And, you know, I think it speaks also, I think, to the importance of like, you know, when, when you put code out, like to try to put a lot of emphasis and like making it like as easy to use as possible and something we thought a lot about when we released the code base. You know, it's far from perfect, but, you know.Brandon [00:43:07]: Do you think that that was one of the factors that caused your community to grow is just the focus on easy to use, make it accessible? I think so.RJ [00:43:14]: Yeah. And we've, we've heard it from a few people over the, over the, over the years now. And, you know, and some people still think it should be a lot nicer and they're, and they're right. And they're right. 
But yeah, I think it was, you know, at the time, maybe a little bit easier than, than other things.Gabriel [00:43:29]: The other thing part, I think led to, to the community and to some extent, I think, you know, like the somewhat the trust in the community. Kind of what we, what we put out is the fact that, you know, it's not really been kind of, you know, one model, but, and maybe we'll talk about it, you know, after Boltz 1, you know, there were maybe another couple of models kind of released, you know, or open source kind of soon after. We kind of continued kind of that open source journey or at least Boltz 2, where we are not only improving kind of structure prediction, but also starting to do affinity predictions, understanding kind of the strength of the interactions between these different models, which is this critical component. critical property that you often want to optimize in discovery programs. And then, you know, more recently also kind of protein design model. And so we've sort of been building this suite of, of models that come together, interact with one another, where, you know, kind of, there is almost an expectation that, you know, we, we take very at heart of, you know, always having kind of, you know, across kind of the entire suite of different tasks, the best or across the best. model out there so that it's sort of like our open source tool can be kind of the go-to model for everybody in the, in the industry. I really want to talk about Boltz 2, but before that, one last question in this direction, was there anything about the community which surprised you? Were there any, like, someone was doing something and you're like, why would you do that? That's crazy. Or that's actually genius. And I never would have thought about that.RJ [00:45:01]: I mean, we've had many contributions. I think like some of the. Interesting ones, like, I mean, we had, you know, this one individual who like wrote like a complex GPU kernel, you know, for part of the architecture on a piece of, the funny thing is like that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this, you know, for this person to, you know, to decide to do it, but that was like a really great contribution. We've had a bunch of others, like, you know, people figuring out like ways to, you know, hack the model to do something. They click peptides, like, you know, there's, I don't know if there's any other interesting ones come to mind.Gabriel [00:45:41]: One cool one, and this was, you know, something that initially was proposed as, you know, as a message in the Slack channel by Tim O'Donnell was basically, he was, you know, there are some cases, especially, for example, we discussed, you know, antibody-antigen interactions where the models don't necessarily kind of get the right answer. What he noticed is that, you know, the models were somewhat stuck into predicting kind of the antibodies. And so he basically ran the experiments in this model, you can condition, basically, you can give hints. And so he basically gave, you know, random hints to the model, basically, okay, you should bind to this residue, you should bind to the first residue, or you should bind to the 11th residue, or you should bind to the 21st residue, you know, basically every 10 residues scanning the entire antigen.Brandon [00:46:33]: Residues are the...Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acids. The 11 amino acids, and so on. 
So it's sort of like doing a scan, and then conditioning the model to predict all of them, and then looking at the confidence of the model in each of those cases and taking the top. It's a very crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. And so there are some interesting ideas where, as the person developing the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking about, okay, can I do this not with brute force but in a smarter way?RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot, and I'm sure we'll talk about it more when we talk about BoltzGen. Our ability to take a structure and determine that that structure is good, you know, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models. If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure, then it really just becomes a ranking problem. And part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. And so I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.Brandon [00:48:17]: But I guess, my understanding is there's a diffusion model and you generate some stuff, and then, I guess it's just what you said, you rank it using a score and then you finally... Can you talk about those different parts? Yeah.Gabriel [00:48:34]: So, first of all, one of the critical beliefs that we had when we started working on Boltz 1 was that structure prediction models are somewhat our field's version of foundation models, learning how proteins and other molecules interact, and then we can leverage that learning to do all sorts of other things. So with Boltz 2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction? For BoltzGen, what we did was take that foundation model and fine-tune it to predict entire new proteins. The way that works is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein and what its different amino acids are. And so basically the way that BoltzGen operates is that you feed in a target protein that you may want to bind to, or another DNA or RNA.
And then you feed the high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language or? And that's basically prompting: we have a spec that you specify, and you feed that spec to the model, and the model translates it into a set of conditioning tokens, a set of blank tokens. And then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. Then we take that and, as Jeremy was saying, we try to score it: how good of a binder is it to that original target?Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. So that kind of gives you a score? Exactly.Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you predict the structure with something like Boltz 2, and then you compare that structure with what the design model predicted. This is what the field calls consistency: you want to make sure that the structure you're predicting is actually what you're trying to design, and that gives you much better confidence that it's a good design. So that's the first filtering. And the second filtering that we did as part of the released pipeline is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question of predicting affinity, confidence is not a very good predictor of affinity. And so that is one of the things we've actually made a ton of progress on since we released Boltz 2.Brandon [00:52:03]: And we have some new results that we are going to announce soon: the ability to get much better hit rates when, instead of trying to rely on the confidence of the model, we directly try to predict the affinity of that interaction. Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it. Exactly.Gabriel [00:52:32]: And actually, one of the big things we did differently compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was merging the structure prediction and the sequence prediction into almost the same task. So the way it works is that the only thing you're doing is predicting the structure. The only supervision we give is supervision on the structure, but because the structure is atomic, and the different amino acids have different atomic compositions, from the way the atoms are placed we recover not only the structure but also the identity of the amino acid the model believed was there. And so, instead of having these two supervision signals, one discrete and one continuous,
which somewhat don't interact well together, we built an encoding of sequences as structures that allows us to use exactly the same supervision signal that we were using for Boltz 2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins. Oh, interesting.RJ [00:53:58]: Maybe a quick shout out to Hannes Stark on our team, who did all this work. Yeah.Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this. It was such a cool, fun idea.RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.Gabriel [00:54:33]: Yeah, it had been proposed, and Hannes really took it to the large scale.Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to basically feel that wet-lab, real-world validation is the whole problem, or not the whole problem but a big, giant part of the problem. So can you talk a little bit about the highlights from there? Because to me the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's lab, and at Boltz, we are not a biolab and we are not a therapeutics company. So to some extent we were forced to look outside of our group, our team, to do the experimental validation. One of the things that Hannes and the team pioneered was the idea: can we go not only to a specific group, find a specific system, maybe overfit a bit to that system, and try to validate, but instead test this model across a very wide variety of different settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get a validation that goes across many different tasks? And so he basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, and some of this testing is still ongoing, giving results back to us in exchange for hopefully getting some great new sequences for their task. And he was able to coordinate this very wide set of scientists, and already in the paper, I think we
Shared results from, I think, eight to 10 different labs kind of showing results from, you know, designing peptides, designing to target, you know, ordered proteins, peptides targeting disordered proteins, which are results, you know, of designing proteins that bind to small molecules, which are results of, you know, designing nanobodies and across a wide variety of different targets. And so that's sort of like. That gave to the paper a lot of, you know, validation to the model, a lot of validation that was kind of wide.Brandon [00:57:39]: And so those would be therapeutics for those animals or are they relevant to humans as well? They're relevant to humans as well.Gabriel [00:57:45]: Obviously, you need to do some work into, quote unquote, humanizing them, making sure that, you know, they have the right characteristics to so they're not toxic to humans and so on.RJ [00:57:57]: There are some approved medicine in the market that are nanobodies. There's a general. General pattern, I think, in like in trying to design things that are smaller, you know, like it's easier to manufacture at the same time, like that comes with like potentially other challenges, like maybe a little bit less selectivity than like if you have something that has like more hands, you know, but the yeah, there's this big desire to, you know, try to design many proteins, nanobodies, small peptides, you know, that just are just great drug modalities.Brandon [00:58:27]: Okay. I think we were left off. We were talking about validation. Validation in the lab. And I was very excited about seeing like all the diverse validations that you've done. Can you go into some more detail about them? Yeah. Specific ones. Yeah.RJ [00:58:43]: The nanobody one. I think we did. What was it? 15 targets. Is that correct? 14. 14 targets. Testing. So we typically the way this works is like we make a lot of designs. All right. On the order of like tens of thousands. And then we like rank them and we pick like the top. And in this case, and was 15 right for each target and then we like measure sort of like the success rates, both like how many targets we were able to get a binder for and then also like more generally, like out of all of the binders that we designed, how many actually proved to be good binders. Some of the other ones I think involved like, yeah, like we had a cool one where there was a small molecule or design a protein that binds to it. That has a lot of like interesting applications, you know, for example. Like Gabri mentioned, like biosensing and things like that, which is pretty cool. We had a disordered protein, I think you mentioned also. And yeah, I think some of those were some of the highlights. Yeah.Gabriel [00:59:44]: So I would say that the way that we structure kind of some of those validations was on the one end, we have validations across a whole set of different problems that, you know, the biologists that we were working with came to us with. So we were trying to. For example, in some of the experiments, design peptides that would target the RACC, which is a target that is involved in metabolism. And we had, you know, a number of other applications where we were trying to design, you know, peptides or other modalities against some other therapeutic relevant targets. We designed some proteins to bind small molecules. And then some of the other testing that we did was really trying to get like a more broader sense. So how does the model work, especially when tested, you know, on somewhat generalization? 
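A compressed sketch of the campaign loop RJ describes above: design a large pool of candidates per target, rank them, send only the top 15 or so to the wet lab, then report per-target and per-design hit rates. The `design_binder` and `score_design` callables are hypothetical stand-ins for a BoltzGen-style generator and scorer, not the real API.

```python
def run_campaign(targets, design_binder, score_design, n_designs=20_000, top_k=15):
    """For each target, generate many candidate binders, rank them with the
    scoring model, and keep only the top_k for experimental testing."""
    return {t: sorted((design_binder(t) for _ in range(n_designs)),
                      key=score_design, reverse=True)[:top_k]
            for t in targets}

def hit_rates(lab_results):
    """lab_results: {target: [True/False per tested design]} from the wet lab."""
    per_design = sum(sum(r) for r in lab_results.values()) / sum(len(r) for r in lab_results.values())
    per_target = sum(any(r) for r in lab_results.values()) / len(lab_results)
    return per_target, per_design   # "how many targets got a binder", "how many designs bound"
```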
So one of the things that we found with the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtering to things where there is no known interaction in the PDB. Basically, the model has never seen this particular protein, or a similar protein, bound to another protein, so there is no way that the model, from its training set, can say, okay, I'm just going to tweak something and imitate this particular interaction. And so we took those nine proteins, we worked with Adaptyv, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two thirds of those targets we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, just a measure of how strong the interaction is; roughly speaking, a nanomolar binder is approximately the binding strength you need for a therapeutic. Yeah. So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? Yeah. This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it? Yeah.RJ [01:02:44]: You know, as we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it into three. The first one: it's one thing to predict a single interaction, for example a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and there's certainly a need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design against it? And there are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. Then there's the stage of designing and searching the space efficiently. For something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable, and the way we do that is with a generative model that learns to use appropriate building blocks, so it designs within a space that we know is synthesizable. And so there's this whole pipeline of different models involved in being able to design a molecule.
And so that's been sort of the first thing. We call them agents: we have a protein design agent and we have a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're just calling them agents?RJ [01:04:33]: Yeah, they sort of perform a function on your behalf. They're more of a recipe, if you wish, and I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing a hundred thousand possible candidates to find the good one. That is a very large amount of compute: for small molecules it's on the order of a few seconds per design, and for proteins it can be a bit longer. So ideally you want to do that in parallel, otherwise it's going to take you weeks. We've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.Brandon [01:05:23]: So you're amortizing the cost over your users. Exactly. Exactly.RJ [01:05:27]: And to some degree, using 10,000 GPUs for a minute is the same cost as using one GPU for God knows how long, right? So you might as well try to parallelize if you can. A lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing that at the same time. And the third one is the interface, and the interface comes in two shapes. One is in the form of an API, and that's really suited for companies that want to integrate these pipelines, these agents.RJ [01:06:01]: So we're already partnering with a few distributors that are going to integrate our API. And then the second part is the user interface. We've put a lot of thought into that as well; this is what I mentioned earlier about broadening the audience, and that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have potentially multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform. So Boltz Lab is a combination of these three objectives in one cohesive platform. Who is this accessible to? Everyone. You do need to request access today. We're still ramping up the usage, but anyone can request access.
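A rough back-of-envelope for the parallel-screening point RJ makes above, with illustrative numbers (a hundred thousand candidates at a few seconds each): the total GPU-hours, and hence the cost, stay the same, and parallelism only changes the wall-clock time.

```python
# Illustrative numbers only: 100,000 small-molecule candidates, ~3 s of GPU time each.
candidates = 100_000
seconds_per_design = 3
gpu_seconds = candidates * seconds_per_design

for n_gpus in (1, 100, 10_000):
    wall_clock_hours = gpu_seconds / n_gpus / 3600
    print(f"{n_gpus:>6} GPUs -> {wall_clock_hours:8.2f} hours wall clock, "
          f"{gpu_seconds / 3600:.0f} GPU-hours total (same cost either way)")
```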
If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies we can deploy this platform in a more secure environment, and that's more of a customized deal we make with those partners. That's sort of the ethos of Boltz, this idea of serving everyone and not necessarily going after just the really large enterprises. That starts from the open source, but it's also a key design principle of the product itself.Gabriel [01:07:48]: One thing I was thinking about with regards to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Yeah. And is it possible that you can essentially exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for anyone to roll their own system? A hundred percent. Yeah.RJ [01:08:08]: I mean, we're already there. Running Boltz on our platform, especially at large scale, is considerably cheaper than it would probably take anyone to stand up the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models. Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices very low, in a way that makes it a no-brainer to use Boltz through our platform.Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now suddenly the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So you're basically leaving the domain that you know you are good at. How do you validate that?RJ [01:09:22]: Yeah, it's never complete, but there's obviously a ton of computational metrics that we rely on, and those only take you so far. You really have to go to the lab and test: okay, with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate, it's also about how good the binders are. And there's really no way around that. I think we've really ramped up the amount of experimental validation that we do so that we really track progress, you know, as scientifically sound, you know. Yeah.
As, as possible out of this, I think.Gabriel [01:10:00]: Yeah, no, I think, you know, one thing that is unique about us and maybe companies like us is that because we're not working on like maybe a couple of therapeutic pipelines where, you know, our validation would be focused on those. We, when we do an experimental validation, we try to test it across tens of targets. And so that on the one end, we can get a much more statistically significant result and, and really allows us to make progress. From the methodological side without being, you know, steered by, you know, overfitting on any one particular system. And of course we choose, you know, w

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — 
Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have frontier capability, but also efficiency, and then offer that range of models that people like to use. And some part of this started because of your hardware work, some part of it is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But it's really impressive to see it all come together like this.Jeff Dean [00:01:04]: Yeah, yeah. I mean, as you say, it's not just one thing. It's a whole bunch of things up and down the stack, and all of those really combine to enable us to make highly capable large models, as well as software techniques to get those large model capabilities into much smaller, lighter weight models that are much more cost effective and lower latency, but still quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google were to use the voice model for, like, three minutes a day, you'd need to double your CPU count. Like, what's that discussion today at Google? How do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because that's where you see what capabilities now exist that didn't exist in the slightly less capable version from last year or six months ago. At the same time, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader uses. So I think what we want to do is always have a highly capable, affordable model that enables a whole bunch of lower latency use cases, so people can use them for agentic coding much more readily, and then have the high-end frontier model that is really useful for deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
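Since "owning the Pareto frontier" is the framing for this whole discussion, here is the idea in code. The cost and quality numbers are made up for illustration, not real Gemini figures: a model is on the frontier if no other model is both cheaper and better.

```python
# Hypothetical (cost, quality) points; a dominated model is strictly worse on both axes.
models = {
    "flash-lite": {"cost_per_mtok": 0.10, "quality": 62},
    "flash":      {"cost_per_mtok": 0.30, "quality": 71},
    "pro":        {"cost_per_mtok": 2.50, "quality": 80},
    "old-pro":    {"cost_per_mtok": 3.00, "quality": 70},   # dominated by "flash"
}

def pareto_frontier(models):
    frontier = {}
    for name, m in models.items():
        dominated = any(
            other is not m
            and other["cost_per_mtok"] <= m["cost_per_mtok"]
            and other["quality"] >= m["quality"]
            for other in models.values()
        )
        if not dominated:
            frontier[name] = m
    return frontier

print(sorted(pareto_frontier(models)))   # ['flash', 'flash-lite', 'pro']
```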
And also, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not an either-or choice. You sort of need it in order to actually get a highly capable, more modest sized model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But I'm curious how you think about the cycle of these ideas, even, you know, sparse models, and how you reevaluate them. How do you think about, in the next generation of models, what is worth revisiting? You've worked on so many ideas that end up being influential, but in the moment they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories (this one's going to be really good at mammals, this one's going to be really good at indoor room scenes or whatever), and you cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent expert models and then squish them into something that actually fits in a form factor you can serve. And that's not that different from what we're doing today. Often today, instead of having an ensemble of 50 models, we have a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that: RL basically spikes models in a certain part of the distribution. You can spike models, but it might be lossy in other areas; it's kind of an uneven technique. But you can probably distill it back, and I think the general dream is to be able to advance capabilities without regressing on anything else. And I feel like that whole capability merging without loss should be some part of a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model and a very large training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to coax the right behavior out of the smaller model, behavior that you wouldn't otherwise get with just the hard labels. And I think that's what we've observed.
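A minimal sketch of the distillation objective being described: the student matches the teacher's softened logits in addition to the hard labels. This is the standard recipe from the Hinton, Vinyals and Dean paper; how Gemini models are actually distilled is not public, so treat this as a generic illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between teacher and student at temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # rescale so gradients are comparable to the hard loss
    # Hard-label term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```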
You can get very close to your largest model performance with distillation approaches. And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think it's an important set of capabilities to have, and inference-time scaling can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, cool. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know; obviously it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, just economics-wise, because Flash is so economical, you can use it for everything. It's in Gmail now. It's in YouTube. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products, in AI Overviews and AI Mode.Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is that not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that involve generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now, not just write me a for loop, but write me a whole software package to do X or Y or Z. So having low latency systems that can do that seems really important, and Flash is one way of doing that. Obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well: TPUs, where the interconnect between chips is actually quite high performance and quite amenable to, for example, long-context attention operations and sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? In certain tasks, the Pro model today saturates some sort of task, so next generation that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need a bunch of architectural improvements, or some sort of model capability improvements? What would help make that better?Shawn Wang [00:12:53]: Is there such an example, where a benchmark inspired an architectural improvement? I'm just kind of jumping on that because you just...Jeff Dean [00:13:02]: Uh, I mean, I think some of the long-context capability of the Gemini models, which came, I guess, first in 1.5, really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: Immediately everyone jumped to completely green charts. Everyone had it. I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something, and most models don't actually have much more than 128K or so these days. We're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, in the context and then actually being able to make use of that is useful; the areas to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated, so you really want more complicated, multi-needle or more realistic benchmarks: take all this content and produce this kind of answer from a long context, something that better assesses what people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say is exactly the kind of thing where you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the deeper representations that we can find, not just for a single video but across many videos. And on a personal Gemini level, you could attend to all of your personal state, with your permission.
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and some soccer goals, and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, the date when they happened, and a short description? And so you now get an 18-row table of that information extracted from the video, which is not something most people think of as a "turn video into a SQL-like table" capability.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, whereas for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out how to build the AI mode that does maybe much broader search versus the more human one?Jeff Dean [00:20:47]: I mean, even with pre-language-model work, our ranking systems were built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight methods, and you're down to, say, 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify the 30,000-ish documents, with maybe 30 million interesting tokens, and then figure out how to go from that to the 117 documents you really should be paying attention to in order to carry out the task the user has asked. And you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models, then some system that helps you narrow down from 30,000 to the 117 with a somewhat more sophisticated model or set of models, and then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, not the illusion, but you are searching the internet, yet you're finding a very small subset of things that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell people that are not steeped in Google search history that, well, BERT was basically put inside of Google search immediately, and that improved results a lot, right?
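A schematic of the staged narrowing Jeff describes above, with placeholder stage functions; the 30,000 and 117 are his illustrative numbers, not a real Google pipeline.

```python
def retrieval_funnel(query, corpus,
                     cheap_filter,      # e.g. keyword/embedding lookup over billions of docs
                     mid_ranker,        # lightweight model scoring ~30,000 candidates
                     strong_reranker,   # more capable model over a few hundred
                     final_model):      # most capable model reads ~100 docs and answers
    candidates = cheap_filter(query, corpus)          # billions -> ~30,000
    shortlist  = mid_ranker(query, candidates)[:300]  # ~30,000  -> a few hundred
    top_docs   = strong_reranker(query, shortlist)[:117]
    return final_model(query, top_docs)               # reason over ~117 documents
```

Each stage spends a more expensive model on a smaller candidate set, which is what gives the illusion of attending to the whole corpus.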
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
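A toy illustration of the "soften the query" point Jeff makes above: once the index is in memory, it is cheap to OR in synonyms and variants for each user term. The synonym table here is made up; production systems derive this from data.

```python
SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "diner"],
    "cheap": ["inexpensive", "affordable", "budget"],
}

def expand_query(user_query: str) -> list[str]:
    expanded = []
    for term in user_query.lower().split():
        expanded.append(term)
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_query("cheap restaurant"))
# ['cheap', 'inexpensive', 'affordable', 'budget', 'restaurant', 'restaurants', 'cafe', 'bistro', 'diner']
# A 2-term query becomes ~9 index lookups: trivial in RAM, but prohibitive when
# each extra term costs a disk seek on every shard.
```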
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like doubling, tripling every year in size, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that, because often what happens is if you design a system for X and something suddenly becomes a hundred X, that enables a very different point in the design space that would not make sense at X but all of a sudden at a hundred X makes total sense. So like going from a disk based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and at what frequency.
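Jeff goes on to describe a system that weighs how often a page changes against how valuable a fresh copy is. A toy version of that trade-off, where pages are ordered by the expected value of a recrawl; the pages and numbers are invented for illustration and have nothing to do with Google's actual crawl scheduler:

```python
# Toy crawl scheduler: priority = page importance x probability it changed
# since the last crawl. All values below are made up for illustration.

pages = [
    # (url, importance, probability the page changed since last crawl)
    ("bignews.example/home",      0.9, 0.8),
    ("popular-wiki.example/tpu",  0.8, 0.1),
    ("tiny-blog.example/post-17", 0.1, 0.5),
]

def recrawl_priority(importance, p_changed):
    return importance * p_changed   # expected value of fetching a fresh copy

for url, imp, p in sorted(pages, key=lambda x: recrawl_priority(x[1], x[2]), reverse=True):
    print(f"{url:28s} priority={recrawl_priority(imp, p):.2f}")
# The important-but-rarely-changing page (0.08) still outranks the frequently
# changing but unimportant one (0.05): the point about recrawling important
# pages often even when their update rate looks low.
```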
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of...Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times.
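Those two figures, roughly a thousand picojoules to move a weight and about one picojoule for the multiply, are enough to reproduce the batching argument Jeff makes next. A small sketch; both constants are only the order-of-magnitude numbers quoted here:

```python
# Amortization arithmetic for batching, using the rough numbers from the
# conversation: ~1000 pJ to move a weight to the multiplier, ~1 pJ for the
# multiply itself. Both constants are order-of-magnitude figures only.

MOVE_PJ     = 1000.0
MULTIPLY_PJ = 1.0

def energy_per_token_pj(batch_size):
    # each weight is moved once, then reused for every item in the batch
    return (MOVE_PJ + MULTIPLY_PJ * batch_size) / batch_size

for b in (1, 8, 64, 256):
    print(f"batch={b:4d}: {energy_per_token_pj(b):7.1f} pJ per weight per token")
# batch=1   -> 1001.0 pJ: a thousand picojoules of data motion for one picojoule of math
# batch=256 ->    ~4.9 pJ: the same movement amortized across the whole batch
```

At batch size one you pay the full cost of the data movement for a single multiply; at 256 that movement is spread across the whole batch, which is why serving systems batch even though it costs per-request latency.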
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to, uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost, uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, and if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime as a chip that takes you three, four or five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast changing field.
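The "stripe a small model over 16 or 64 chips so it all lives in SRAM" point from earlier in this answer is easy to sanity-check with arithmetic. A sketch; the SRAM-per-chip figure and the 8-bit weight assumption are illustration values, not quoted anywhere in the conversation:

```python
# Does a model fit in on-chip SRAM if you stripe it across many chips?
# SRAM per chip and bytes per parameter are assumptions for illustration.

SRAM_PER_CHIP_MB = 128          # assumed on-chip SRAM per accelerator
BYTES_PER_PARAM  = 1            # assume 8-bit weights

def chips_needed(params_billions):
    model_mb = params_billions * 1e9 * BYTES_PER_PARAM / 1e6
    return model_mb / SRAM_PER_CHIP_MB

for size in (1, 8, 70):
    print(f"{size}B params -> ~{chips_needed(size):.0f} chips to hold all weights in SRAM")
# A smallish model striped over a few dozen chips can live entirely in SRAM;
# a large one cannot, which is why HBM (and batching) stays in the picture.
```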
And so having people with interesting ML research ideas, of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip design to be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of careful, uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you might train at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. So low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know. Uh, we just, at the end of this, we're going to have all these like chips that will do like very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends though.
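The low-bit-precision-plus-scaling-factors idea mentioned just above is easy to show in miniature. A sketch of group-wise quantization; the 4-bit width and group size of 32 are arbitrary illustration choices, not anything specific to TPUs or Gemini:

```python
import numpy as np

# Group-wise quantization sketch: tiny integer weights plus one scale per group.

def quantize_groups(weights, group_size=32, bits=4):
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit signed
    w = weights.reshape(-1, group_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                       # avoid divide-by-zero for all-zero groups
    q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_groups(q, scales, shape):
    return (q.astype(np.float32) * scales).reshape(shape)

w = np.random.default_rng(0).normal(size=(4, 64)).astype(np.float32)
q, scales = quantize_groups(w)
w_hat = dequantize_groups(q, scales, w.shape)
print("max abs reconstruction error:", float(np.abs(w - w_hat).max()))
```

Each group stores small integers plus one higher-precision scale, so most of the bits moved per weight go away while the reconstruction error stays modest.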
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens, is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, where you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions you end up doing, uh, at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems in how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more significant pieces of work, uh, collectively, than you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that.
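The speculative decoding trick described a moment ago is mostly bookkeeping, so a toy version fits in a few lines. Both "models" below are stand-in functions (the draft one simply agrees with the big one most of the time), and the verification loop is written sequentially for clarity even though the whole point is that a real system scores all drafted positions in one batched pass:

```python
import random

# Toy speculative decoding: a cheap draft model guesses k tokens ahead, the
# big model verifies them, and the agreeing prefix is kept.

random.seed(0)
VOCAB = list("abcdefgh")

def big_model(prefix):                  # the expensive model: next token given a prefix
    return VOCAB[hash(prefix) % len(VOCAB)]

def draft_model(prefix):                # cheaper model that agrees with big_model ~75% of the time
    return big_model(prefix) if random.random() < 0.75 else random.choice(VOCAB)

def speculative_step(prefix, k=8):
    draft = []
    for _ in range(k):                  # k cheap sequential guesses
        draft.append(draft_model(prefix + "".join(draft)))
    accepted = []
    for tok in draft:                   # verify against what the big model would have said
        truth = big_model(prefix + "".join(accepted))
        accepted.append(truth)
        if truth != tok:                # first disagreement ends the accepted run
            break
    return accepted

out = speculative_step("seed", k=8)
print(f"{len(out)} tokens emitted for one big-model weight load: {''.join(out)}")
```

If five or six of the eight drafted tokens survive verification on average, the expensive model's weights are moved roughly once per five tokens emitted instead of once per token, which is the amortization being described.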
Uh, effectively, that would, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way, it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part that you can score, or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, LLM judge.
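The "same model, prompted differently, acting as a critic" pattern is simple to sketch. The `generate` function below is a stub standing in for whatever LLM call you actually have (it is not a real API), and the prompt wording is just one plausible way to ask for a graded relevance score:

```python
# LLM-as-critic sketch: use a model call to grade retrieved documents.

def generate(prompt: str) -> str:
    # stub so the sketch runs end to end; imagine a real model call here
    return "8" if "tpu" in prompt.lower() else "2"

def judge_relevance(query: str, doc: str) -> int:
    prompt = (
        "You are grading search results.\n"
        f"Query: {query}\nDocument: {doc}\n"
        "Rate relevance from 0 (useless) to 10 (perfect). Reply with a number only."
    )
    try:
        return int(generate(prompt).strip())
    except ValueError:
        return 0        # an unparseable judgement counts as irrelevant

docs = ["TPU accelerator energy notes", "cookie recipes", "datacenter cooling overview"]
ranked = sorted(docs, key=lambda d: judge_relevance("ML accelerator power", d), reverse=True)
print(ranked)   # keep whichever documents the critic scores highest
```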
Um, uh, just to dwell a bit on the IMO gold. Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things. And then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about, like, the merger of symbolic systems and, and LLMs, uh, was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural net like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have like completely separate, uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think like that IMO progression, with, you know, translating to Lean and using Lean, and the next year also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically and, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model. Or I want to, you know, decode speech, so I have a speech recognition model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that like people with these, this like universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh.
There's this concept of like, uh, maybe capacity of a model: like abstractly a model can only contain the number of bits that it has. And, uh, and so, you know, God knows, like, Gemini Pro is like one to 10 trillion parameters, we don't know. But, uh, the Gemma models, for example, right? Like a lot of people want the open source local models that are like that, and, uh, they have some knowledge which is not necessary, right? Like they can't know everything. Like you have the luxury of the big model, and the big model should be capable of everything. But like when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of like how long bridges are, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use, being able to retrieve from my email as a tool, and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple, um, you know, uh, stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are like, uh, an interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of, uh, view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Um, or for, say, robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
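The personal Gemini idea from a moment ago, one general model reasoning over tools like email search rather than being trained on that private data, boils down to a retrieve, reason, retrieve-again loop. A toy sketch; the `llm` function and the email tool are invented placeholders, not any real Gemini interface:

```python
# Retrieve-reason-retrieve sketch: a general model calling personal-data tools.

EMAIL = {
    "flight":  "Flight AB123 departs 9:40am Tuesday from SFO.",
    "dentist": "Dentist appointment moved to March 3.",
}

def search_email(query: str) -> str:
    return next((v for k, v in EMAIL.items() if k in query.lower()), "no match")

def llm(context: str) -> str:
    # placeholder "reasoning" step: decide whether another lookup is needed
    if "travel" in context and "AB123" not in context:
        return "LOOKUP: flight"
    return "ANSWER: leave early enough for the 9:40am departure"

def answer(question: str, max_hops: int = 3) -> str:
    context = question
    for _ in range(max_hops):              # multiple stages of retrieval plus reasoning
        step = llm(context)
        if step.startswith("LOOKUP:"):
            context += "\n" + search_email(step.split(":", 1)[1])
        else:
            return step
    return "gave up after too many hops"

print(answer("When do I need to leave for my travel on Tuesday?"))
```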
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kind of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming. You know, it'll still be good at Python programming, because we'll include enough of that, but there's other long tail programming languages or coding capabilities that it may suffer on, or multi, uh, multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module, that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, it's like, they're probably not out there, you know. I think that's really like the.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data, that is, not public healthcare data. Um, so I think there are opportunities there to say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low resource language in the context and it just learns.
Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But I think you can put your whole data set in the context, right.Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, uh, or Ethiopian Amharic or something. Um, you know, we probably, yeah, are not putting all the data from those languages into the base Gemini training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.Jeff Dean [00:56:49]:

    Composites Weekly
    From Scan to Surgery: 3D-Printed Implants for Injured Soldiers in Ukraine

    Composites Weekly

    Play Episode Listen Later Feb 12, 2026 14:23


    On this episode, Nancy Hairston, CEO of MedCAD, joins the show to discuss their innovative approach to the design and production of patient-matched medical devices using additive manufacturing. They've recently produced 3D-printed implants for wounded Ukrainian soldiers, an application where speed and accuracy can be life-changing. Their approach is 100% patient-customized, with every implant and […] The post From Scan to Surgery: 3D-Printed Implants for Injured Soldiers in Ukraine first appeared on Composites Weekly.

    The Newsmax Daily with Rob Carson
    3D Chess, C-17s & Crazy Cat Ladies

    The Newsmax Daily with Rob Carson

    Play Episode Listen Later Feb 11, 2026 42:18


    -Rob dives into global intrigue as C-17 cargo planes head toward the Middle East, Cuba teeters on fuel collapse, and he declares Trump is playing “3D chess” while the Ayatollah studies checkers. -On the Newsmax hotline, Philip Patrick of Birch Gold joins Rob to break down gold's rollercoaster ride—down $1,000, back up $500—and why central banks are stacking precious metals like it's Black Friday for bullion. Today's podcast is sponsored by : RELIEF FACTOR - You don't need to live with aches & pains! Reduce muscle & joint inflammation and live a pain-free life by visiting http://ReliefFactor.com  QUINCE CLOTHING - Refresh your wardrobe with Quince.  Go to http://Quince.com/NEWSMAX for free shipping on your order and 365-day returns. BIRCH GOLD - Protect and grow your retirement savings with gold. Text ROB to 98 98 98 for your FREE information kit! To call in and speak with Rob Carson live on the show, dial 1-800-922-6680 between the hours of 12 Noon and 3:00 pm Eastern Time Monday through Friday…E-mail Rob Carson at : RobCarsonShow@gmail.com Musical parodies provided by Jim Gossett (http://patreon.com/JimGossettComedy) Listen to Newsmax LIVE and see our entire podcast lineup at http://Newsmax.com/Listen Make the switch to NEWSMAX today! Get your 15 day free trial of NEWSMAX+ at http://NewsmaxPlus.com Looking for NEWSMAX caps, tees, mugs & more? Check out the Newsmax merchandise shop at : http://nws.mx/shop Follow NEWSMAX on Social Media:  -Facebook: http://nws.mx/FB  -X/Twitter: http://nws.mx/twitter -Instagram: http://nws.mx/IG -YouTube: https://youtube.com/NewsmaxTV -Rumble: https://rumble.com/c/NewsmaxTV -TRUTH Social: https://truthsocial.com/@NEWSMAX -GETTR: https://gettr.com/user/newsmax -Threads: http://threads.net/@NEWSMAX  -Telegram: http://t.me/newsmax  -BlueSky: https://bsky.app/profile/newsmax.com -Parler: http://app.parler.com/newsmax Learn more about your ad choices. Visit megaphone.fm/adchoices

    DoD Contract Academy
    The Hobby That Earns $200M in Government Contracts

    DoD Contract Academy

    Play Episode Listen Later Feb 11, 2026 10:39


    Have you ever wondered if you could get paid for something you already enjoy doing?In this video, I break down real examples of hobbies that the U.S. federal government is actively spending millions of dollars on every year. Most people think government contracting is only about defense systems, IT, or construction. The reality is very different. The federal government is the single largest buyer of goods and services in the world, and that includes areas most people would never expect.We're talking about government contracts tied to yoga instruction and wellness programs, ATV and off-road vehicle training, 3D printing and additive manufacturing, foreign language translation and interpretation, and even professional dog training services. These are not edge cases. These are recurring federal spending categories that create real opportunities for small businesses, consultants, and professionals who understand how federal procurement works.00:00 Can You Get Paid for Your Hobby? Government Contract Reality00:45 Yoga Government Contracts 02:05 ATV and Off-Road Vehicle Government Contracts 03:15 3D Printing | Additive Manufacturing 04:30 Foreign Language 06:30 Dog Training Government Contracts 09:00 How to Research Government Contracts on SAM.gov 09:30 How Government Contracting Actually Works 10:00 Three Ways to Make Money Using Government Contracting Expertise

    Heartbeat For Hire with Lyndsay Dowd
    189: The Money Patterns Quietly Running Your Leadership and Your Life with Elizabeth Ralph

    Heartbeat For Hire with Lyndsay Dowd

    Play Episode Listen Later Feb 11, 2026 34:33


    Elizabeth Ralph is a former energy trader turned spiritual teacher and high level wealth strategist. She retired at age 39 and is now the host of The Spiritual Investor Podcast (top 2%) and creator of The Spiritual Investor Program — a non-linear, frequency-based approach that blends financial strategy with energetic alignment. With a deep understanding of markets, money psychology, wealth management, and spiritual law, Elizabeth helps people build timeless wealth that creates true financial sovereignty from the inside out. Through her teachings, she guides others in collapsing time and building effortless wealth as a way of being. Her investing philosophy is Warren Buffet meets modern diversification. According to her, true financial freedom isn't bound to 3D structures. She lives her method daily, teaching through real stories, grounded strategies, and a deep reverence for cycles, energy, and choice. Her method has helped thousands shift from circumstance and survival-based money patterns into sovereignty, joy, and legacy wealth.   Summary:   In this episode of the Heartbeat For Hire Podcast, host Lyndsay Dowd welcomes Elizabeth Ralph, a former energy trader turned high-level wealth strategist who retired at age 39. Elizabeth is the creator of "The Spiritual Investor," a method that blends practical financial strategy with energetic alignment to help high performers move from a state of financial survival into true wealth sovereignty.   Socials:   Website: https://www.thespiritualinvestor.com/ IG: https://www.instagram.com/elizabethralph LinkedIn: https://www.linkedin.com/in/elizabethralph/ YouTube: https://www.youtube.com/@Elizabeth_Ralph   Key Takeaways:   Wealth as a Way of Being: True wealth is built through energetic alignment rather than a constant "grind" or struggle. Breaking Inherited Narratives: Many individuals unknowingly carry "money narratives" and scarcity fears inherited from their parents, which must be healed to achieve true financial freedom. Collapsing Time: Leaders can achieve faster, more effortless results by aligning their personal growth and spiritual practices with their financial actions. Practical Tools vs. Universal Field: Success requires pulling energy from the "universal field of money" and applying it directly to 3D actions like investing. Commitment Level: Transformation is tied to your level of commitment; Elizabeth notes significant shifts often occur by the third week of her intensive Mastermind programs. Episode Chapters: 00:00 Introduction: Overview of today's episode and guest Elizabeth Ralph. 01:53 Elizabeth's Story: Retiring at 39 and moving from energy trading to wealth strategy. 03:28 The Birth of The Spiritual Investor: How a meditation at a client's door changed Elizabeth's career path. 05:40 Combining Finance and Spirituality: Why high performers need more than just traditional financial advice. 07:00 Finding Your Level of Commitment: Understanding the different ways to work with Elizabeth, from clubs to masterminds. 11:11 Pulling Energy into 3D Results: Moving beyond "hope" to tangible financial action. 12:08 Healing Money Scarcity: Breaking free from inherited family narratives about wealth. 15:15 Reality vs. Media Narratives: A look at current economic truths versus the fear shared in the news. 24:58 Inspiration and Service: What drives Elizabeth to share her message and the legacy she hopes to leave.  

    Donos da Razão
    #344 - Fomos humilhados pela I.A. ft Mabe

    Donos da Razão

    Play Episode Listen Later Feb 11, 2026 59:10


    The gossip from the trip never ends, and today we welcome one more accomplice, Mabê, to talk about her stories with AI and the craze of having a 3D printer at home.

    Coffee Sketch Podcast
    193 - Ad Astra

    Coffee Sketch Podcast

    Play Episode Listen Later Feb 11, 2026 61:07


    SummaryIn this engaging conversation, Kurt and Jamie explore a variety of topics ranging from digital practices in architecture, culinary experiences, and the evolution of technology, to reflections on significant historical events like the Challenger disaster. They also delve into the artistic process behind sketching, the cultural commentary found in films like The Fifth Element, and personal experiences related to identity and citizenship. The discussion is rich with humor, insights, and a shared passion for creativity and exploration.TakeawaysKurt shares his temporary basement setup for recording.Jamie discusses a 3D print of an Italian hilltown.The conversation touches on culinary experiences and restaurant recommendations.They reflect on the challenges of learning new software and technology.Kurt emphasizes the importance of practice in mastering skills.Jamie shares insights on the significance of space exploration and historical events.The duo discusses the impact of the Challenger disaster on education and public perception.They explore the artistic process and the meaning behind sketches.The conversation highlights the cultural significance of films like The Fifth Element.Kurt and Jamie reflect on personal experiences related to identity and citizenship.TitlesExploring Digital Practices in ArchitectureCulinary Adventures and Cultural InsightsSound bites"I found the mirror.""You know, it's funny.""Enjoy it."Chapters00:00 Welcome to the Green Room02:32 Exploring Digital Practices and 3D Printing04:48 Culinary Adventures and Cultural Insights07:09 The Green Room Podcast Dynamics08:34 Navigating Technology and Learning11:04 Reflections on Software Evolution13:24 Coffee Conversations and Personal Touches15:57 Sports and Cultural Connections18:23 Sketching and Artistic Expression20:48 Space Exploration and Historical Reflections23:40 The Challenger Disaster and Its Impact26:43 Artistic Inspirations and Aspirations29:36 Cultural References in Film32:27 The Fifth Element: A Cinematic Exploration35:28 Current Events and Social Commentary38:23 Personal Experiences and Identity41:22 Concluding Thoughts and Future DiscussionsSend Feedback :) Support the showBuy some Coffee! Support the Show!https://ko-fi.com/coffeesketchpodcast/shop Our Links Follow Jamie on Instagram - https://www.instagram.com/falloutstudio/ Follow Kurt on Instagram - https://www.instagram.com/kurtneiswender/ Kurt's Practice - https://www.instagram.com/urbancolabarchitecture/ Coffee Sketch on Twitter - https://twitter.com/coffeesketch Jamie on Twitter - https://twitter.com/falloutstudio Kurt on Twitter - https://twitter.com/kurtneiswender

    Hardcore Gaming 101
    Resident Evil: Survivor (and Starstruck: Hands of Time!)

    Hardcore Gaming 101

    Play Episode Listen Later Feb 10, 2026 159:19


    Join the HG101 gang as they discuss and rank an off-rails light-gun shooter/survival-horror hybrid. Then stick around for Starstruck: Hands of Time, a mixed-media rhythm RPG! This weekend's Patreon Bonus Get episode will be RASCAL — a bubble-blowing 3D platformer! Donate at Patreon to get this bonus content and much, much more! Follow the show on Bluesky to get the latest and straightest dope. Check out what games we've already ranked on the Big Damn List, then nominate a game of your own via five-star review on Apple Podcasts! Take a screenshot and show it to us on our Discord server! Intro music by NORM. 2026 © Hardcore Gaming 101, all rights reserved. No portion of this or any other Hardcore Gaming 101 ("HG101") content/data shall be included, referenced, or otherwise used in any model, resource, or collection of data.

    Firearms Radio Network (All Shows)
    We Like Shooting 649 – This is a threat

    Firearms Radio Network (All Shows)

    Play Episode Listen Later Feb 10, 2026


    We Like Shooting - Ep 649 This episode of We Like Shooting is brought to you by: C&G Holsters (Code: WLSISLIFE) Midwest Industries (Code: WLSISLIFE) Gideon Optics (Code: WLSISLIFE) Die Free Co. (Code: WLSISLIFE) Blue Alpha Flatline Fiber Co (Code: WLS15) Bowers Group (Code: WLS) Guests: Bob from Gideon Optics. https://gideonoptics.com/ Text Dear WLS or Reviews +1 743 500 2171  New Public Notes Page: https://dngrsfrdm.com/public/ GEAR CHAT T-Worx Intelligent Rail (Nick) The T-Worx Intelligent Rail is a rail system designed for firearms that integrates smart technology for enhanced accessory management and user interaction. It features embedded sensors and connectivity to provide real-time data on attached devices. This allows for optimized performance in tactical applications through intelligent power distribution and diagnostics. Rozvelt Vektr (Nick) The Rozvelt Vektr is a precision-engineered multi-caliber pistol platform designed for modular adaptability. It features a direct impingement gas system optimized for suppressed shooting and quick barrel swaps. Constructed with high-grade aluminum and steel components, it supports calibers including 9mm, .300 BLK, and 5.56 NATO. Hi-Point and Inland Launch New Affordable Suppressors Hi-Point and Inland Empire Arms have introduced new suppressor models aimed at budget-conscious shooters. These direct-thread suppressors are designed for compatibility with popular calibers like 9mm and .300 Blackout. The release emphasizes affordability and ease of use for entry-level suppressed shooting. Ferro Concepts & Spiritus Systems Unveil Open Standard for Plate Carrier Modularity Ferro Concepts and Spiritus Systems have jointly proposed an open standard to enhance plate carrier modularity, allowing seamless integration of accessories across different manufacturers' systems. The initiative aims to eliminate proprietary barriers, fostering innovation and compatibility in tactical gear. Detailed specifications and collaboration details are outlined in the announcement. BULLET POINTS Armory of Kings FRT90 Forced Reset Trigger for PS90 The FRT90 is a forced reset trigger developed by Armory of Kings specifically for the FN PS90 carbine, showcased at SHOT 2026. It enables rapid semi-automatic fire by mechanically resetting the trigger after each shot. The trigger is designed to comply with current ATF regulations on forced reset mechanisms. Caracal PCCs and Bolt Guns Now Available in the USA Caracal International has announced the availability of their PCCs and bolt-action rifles in the USA through a new distribution partnership. The lineup includes 9mm PCCs and .308 bolt guns designed for reliability and modularity. These firearms are now accessible to American consumers via select retailers. Staccato HD C4X Compensated Pistol The Staccato HD C4X is a new compensated 9mm 1911-style pistol introduced at SHOT 2026, featuring a fully supported match barrel with a C4X compensator integrated into the slide. It incorporates the HD Modular Chassis System for customizable grip modules and enhanced ergonomics. Designed for high-performance shooting with reduced muzzle flip, it maintains compatibility with Staccato's optics-ready platform. Irregular Design Group Suppressors Irregular Design Group offers suppressors designed for optimal performance in field applications. The article from Guns.com dated February 5, 2026, highlights their innovative suppressor lineup. Specific models and detailed specs are featured for technical evaluation. 
Vickers Tactical Slide Racker for Gen3/Gen5 Large Caliber Glock Models The Vickers Tactical Slide Racker is designed for Gen3 and Gen5 large caliber Glock models, including 10mm, .40 S&W, .45 ACP, and .45 Super. It features a large, textured aluminum lever that attaches to the rear of the slide for enhanced racking leverage. Made in the USA, it aids users with limited hand strength or those wearing gloves by providing extra purchase on the serrations. Laser Engravers for ATF Form 1 Compliance on Firearms and Suppressors The article discusses using affordable diode laser engravers to mark firearms, suppressors, and other NFA items for ATF Form 1 approval, replacing traditional engraving methods. Recommended models include the xTool D1 Pro (10W and 20W) and Ortur Laser Master 3, which offer sufficient power for engraving on metals like aluminum and titanium with proper preparation. Key steps involve surface cleaning, applying marking spray, and using software like LightBurn for precise, legible markings meeting ATF depth and legibility standards. Springfield Armory's Blued SA-35: 10.8 Performance 1911 Masterclass at SHOT Springfield Armory unveiled the blued SA-35 at SHOT Show, blending classic 1911 design with high-performance features for superior accuracy and reliability. This limited-edition pistol showcases a 10.8-inch sight radius and match-grade barrel, optimized for precision shooting. It's positioned as a premium tribute to the iconic SA-35 lineage with modern enhancements. Beretta A300 Ultima Patrol: 20-Gauge Tactical Shotgun Review The Beretta A300 Ultima Patrol in 20-gauge is designed for home defense and patrol duties, featuring a durable synthetic stock and oversized controls for reliability in high-stress situations. It boasts Beretta's renowned gas-operated system with improved piston and recoil spring for reduced wear and faster cycling. This model emphasizes tactical ergonomics with a 19.1-inch barrel and Picatinny rail for optics. GUN FIGHTS No one stepped into the arena this week. WLS IS LIFESTYLE GunWashington X Post on Firearms Culture Not Stated. The provided input is a URL to an X (Twitter) post, but no page content or text was retrieved or provided for analysis. Unable to extract technical details on firearms culture. GOING BALLISTIC Maryland House Judiciary Committee to Hear HB 874 Handgun Ban Bill The Maryland House Judiciary Committee is scheduled to hear House Bill 874 on February 12, 2025, which seeks to ban the manufacture, sale, and possession of certain semiautomatic handguns classified as ‘assault pistols.' The bill targets specific models like the Beretta 92X Performance, CZ P-10C, Glock 19, Sig Sauer P320, and Smith & Wesson M&P 2.0, among others listed in proposed Criminal Law Article § 4-302. NRA-ILA urges opposition to the bill, viewing it as an infringement on Second Amendment rights. California AG Sues Gatalog Over 3D-Printed Gun CAD Files Distribution California Attorney General Rob Bonta filed a lawsuit against Gatalog LLC and its operator, Len Patterson, for allegedly distributing CAD files for 3D-printing unserialized firearms, violating state ghost gun laws. The suit claims Gatalog's website enabled the production of undetectable and untraceable guns by providing over 644 firearm designs. It seeks to halt the distribution and impose civil penalties under California's assault weapons and unsafe handgun laws. 
New Mexico House Bill 82: Democrats Advance Broadest Gun Ban in US New Mexico House Democrats are poised to pass House Bill 82 this week, which would ban dozens of semi-automatic firearms including AR-15s, AK-47s, and many handguns. The bill targets firearms with detachable magazines and specific features like pistol grips or folding stocks. It has advanced through committee and is scheduled for a House floor vote. Gun Owners of America Action Alert: Oppose S. 407 Anti-Gun Bill (February 3, 2026) Gun Owners of America urges members to contact Senators to oppose S. 407, a bill introduced by Sen. Dick Durbin (D-IL) that would ban commonly owned semi-automatic firearms, including AR-15s and similar rifles. The legislation targets firearms with pistol grips, folding stocks, and other standard features, classifying them as ‘assault weapons.' It also bans magazines over 10 rounds and imposes restrictions on private transfers. Ammoland Article: Committed Gun Grabbers Claim to Support the Second Amendment (February 2026) The article criticizes politicians and groups labeled as ‘gun grabbers' who publicly claim support for the Second Amendment while advocating restrictive gun control measures. It highlights inconsistencies in their rhetoric and actions, portraying them as undermining constitutional rights. Examples include statements from figures like Joe Biden and organizations such as Everytown for Gun Safety. DOJ Amicus Brief in Support of Challenge to Massachusetts Handgun Roster (Savage) The U.S. Department of Justice filed an amicus curiae brief in a federal lawsuit challenging Massachusetts' handgun roster law, arguing that the Attorney General's authority to ban handguns lacking arbitrary safety features violates the Second Amendment. The brief, submitted in the case Reese v. Department of Revenue, contends that the roster effectively prohibits most modern handguns by imposing subjective loaded chamber indicator and magazine disconnect requirements not justified by public safety data. It cites post-Bruen precedents to assert that Massachusetts' scheme fails constitutional scrutiny. Oregon Democrats Propose Two-Year Delay for Permit-to-Purchase Law (HB 2005) (Savage) Oregon Democrats are advancing a proposal to delay the implementation of the state's new permit-to-purchase handgun law, HB 2005, from its original August 2026 start date to August 2028. The delay addresses concerns over the Oregon State Police's readiness to process the required background checks and issue permits. This comes amid ongoing legal challenges to the law, which mandates a safety course, background check, and references for handgun purchases. New Mexico House Bill 129 – Proposed Broadest Gun Ban in US (Savage) New Mexico Democrats are advancing House Bill 129, which would ban a wide array of semi-automatic firearms including AR-15s, AK-47s, and many handguns.

    WHEN THE HUNT CALLS
    NYCBP EP.36 - Talking T.A.C.

    WHEN THE HUNT CALLS

    Play Episode Listen Later Feb 10, 2026 47:20


    Ever register for a Total Archery Challenge event? It's probably THE most popular 3D archery event out there. Devian and Cliff never registered for one...until recently. Join them both as they discuss how registration for the Seven Springs T.A.C. went.   - - - - - - - - - - - - - - - - - - - - DON'T FORGET: For a 15% discount on SKRE Gear, use code NYC - - - - - - - - - - - - - - - - - - - - Follow the NYC Bowhunting Podcast, Cliff, and Devian on Instagram: NYCBP: @nycbowhuntingpod Cliff: @urbanarcherynyc Devian: @citykidbushcraft

    Midlife Pilot Podcast
    EP167 - Cockpit Tetris and Mess Management

    Midlife Pilot Podcast

    Play Episode Listen Later Feb 10, 2026 66:48


    Episode 167 tackles the challenge of cockpit organization with real solutions from three very different flying setups.Brian's still battling the conspiracy to prevent his instrument checkride (spoiler: UPS trucks and snow banks are involved), while Ted shares wisdom from his Miata-sized cockpit about 3D-printed organizers, strategic cup holder placement, and why everything needs "ONE home." Ben discovers that big game hunting expos are surprisingly good aviation networking venues.From Pivot cases and pulse oximeters to the life-or-death importance of proper seatbelt clipping, plus listener feedback that sparks a deeper discussion about test scores and aviation's competitive culture. Because when you're hand-flying an approach in the clouds and ATC changes your clearance, you need to be flying the airplane, not managing your mess.Features the guys' real-world tips for everything from "hard-wired" Stratus installations to keeping cash hidden in baggage organizers for emergency out-calls. Plus: why Ted went from zero flight hours to helping rewrite aviation regulations in under five years.Mentioned on the show:* Dallas Safari Club: https://www.biggame.org/convention/* Georgia World Congress Center: https://en.wikipedia.org/wiki/Georgia_World_Congress_Center* Nicholas Air: https://www.nicholasair.com/* ASTM F37 committee: https://www.astm.org/membership-participation/technical-committees/committee-f37* DPE and educator Seth Lake: https://vsl.aero/* Seth's ACE guide: https://vslaviation.myshopify.com/* TL Sparker: https://tlsportaircraft.com/sparker/* BRS Parachutes: https://brsaerospace.com/* Ted's "purse", holds the iPad Mini: https://www.amazon.com/dp/B07FZL4TZP* Pivot case: https://pivotcase.com/products/a35a* Calm Cockpit Podcast: https://calmcockpit.com/* Flyfisherman Lefty Kreh: The Greatest of All Time: https://www.flyfisherman.com/editorial/lefty-greatest-of-all-time/516098Website: https://midlifepilotpodcast.comPatreon: https://patreon.com/midlifepilotpodcastLeave us a 5-star review and we'll read it on the air!

    Upmarket: The Business of Real Estate Photography & Media
    Ep. 112 - What Realtors Really Want From AI

    Upmarket: The Business of Real Estate Photography & Media

    Play Episode Listen Later Feb 10, 2026 65:24


    AI is everywhere in real estate right now, but most of the industry is talking about tools instead of listening to agents. In this episode, Reed and special guest Sam Benner break down what Realtors actually want from AI, from real efficiency gains to the fears around losing authenticity and trust. We unpack how agent priorities have shifted, where they're really spending money, and why marketing and tech fatigue is very real in 2026. From regulation and compliance to video, portals, and brand consistency, this is a grounded look at what actually matters to working agents. This is an honest conversation about cutting through hype and focusing on what truly helps Realtors grow their business. Don't worry, they still end the show with their Action Items... things that any listener can do right now to help lay the foundation for scaling their Real Estate Media Business.

Follow the pod on Instagram at @upmarketpod. If you would like to access Sam Benner's Action Plan PDF, give us a follow and send a DM to @upmarketpod.

The Presenting Sponsor of Upmarket is Fotello, an AI media platform built to snap, upload, and deliver. Pricing starts at $12 per listing, with human revisions available within six hours. To get started, visit https://fotello.co/?via=upmarket and subscribe to begin using the platform. If you do not use the link, enter the code UPMARKET during signup.

Another amazing sponsor is iGUIDE, which helps real estate professionals capture spaces fast and with industry-leading accuracy. Their PLANIX Pro camera delivers trusted measurements, with no subscriptions and priced per project. Options like iGUIDE Instant provide a clean 3D tour and interactive floor plan in minutes, starting at $7.99. Learn more at goiguide.com or @go_iguide.

Another sponsor is HDPhotoHub, the all-in-one platform for ordering, scheduling, and delivering complete marketing kits, from video reels to print. With pay-per-listing pricing, transparent terms, and industry-leading integrations, HDPhotoHub helps you build the workflow you actually want. Visit HDPhotoHub.com and use code Upmarket to get your first 15 full deliveries free.

Another amazing sponsor of Upmarket is SecondFloor, the fastest way to create a finished floor plan. It's so fast that you can deliver the finished floor plan while you are still on-site! Not only that, but you can get UNLIMITED floor plans for one low monthly fee. We love SecondFloor and you can go to secondfloorapp.com/upmarket and any new subscriber will get a one-month free trial.

Our Action Items are sponsored by PixlCRM, where you can scale your real estate photography business through automation. It's an all-in-one business and marketing platform that complements your current delivery app. If you go to pixlcrm.com/upmarket you can get a 30-day risk-free trial!

    This Week in XR Podcast
    America Is Racing Toward An AI Cliff With No Safety Net, Will AGI Hurt Or Harm? - Alvin Wang Graylin

    This Week in XR Podcast

    Play Episode Listen Later Feb 10, 2026 49:23


    Our guest this week, Alvin Wang Graylin, spent 35 years in senior leadership roles across HTC, IBM, and other major tech companies. He ran HTC's VR division, came out of the famous HIT Lab, now teaches at MIT, holds a fellowship at Stanford, and just published a paper called "Beyond Rivalry" proposing a seven-point plan for deescalating US-China AI tensions and building a global safety net before the economy breaks. His thesis: America is the fastest in the AI race and the least prepared for what it's creating—a cliff where human labor theory of value collapses, capital concentration accelerates, and 40% of the population living month to month faces chaos.

The conversation becomes a wide-ranging debate between Alvin, Charlie, and Rony about whether AGI will be benevolent by default (Alvin's position: research shows smarter AI seeks global coherence and becomes less controllable by individual humans, which may actually make it safer) or whether benevolence must be designed in from scratch.

AI XR News You Should Know: Elon Musk merges SpaceX, xAI, and X into a single entity—Alvin dismantles the space data center concept with physics (vacuum cooling is a myth, micro-meteorite collisions would destroy hardware daily, and energy is only 10% of data center costs). Amazon invests $50 billion in OpenAI that round-trips back to AWS. Alphabet breaks revenue records at $400 billion but spooks investors by disclosing $90 billion in AI spending. ElevenLabs raises $500 million at $11 billion valuation. Rony's SynthBee hits unicorn status with $100 million raised at a multi-billion dollar valuation. Alvin warns the AI bubble dwarfs the dot-com era (298 companies raised $24 billion total during dot-com; OpenAI alone is raising that in a single private round) and predicts OpenAI may implode before going public.

Key Moments Timestamps:
[00:04:47] SpaceX/xAI/X merger: Rony calls it Elon's "return to Tony Stark form"
[00:06:41] Alvin dismantles space data centers with physics: vacuum cooling myth, micro-meteorites, $7K/kg launch costs
[00:10:04] Amazon's $50B investment in OpenAI as a round-trip to AWS; the scam economy
[00:11:26] Alvin predicts OpenAI may implode before going public
[00:14:23] Alvin on 35 years in AI: the technology is transformational but everyone's making a commodity product
[00:17:04] The AI bubble dwarfs dot-com: $24B total vs. single private rounds today
[00:19:04] Rony's contrarian: the $110 trillion global economy is what's being bet against
[00:21:06] Labor theory of value collapses: what happens when humans exit the production cycle
[00:23:00] America is fastest in the AI race and least prepared; 40% live month to month
[00:24:00] Alvin's Stanford paper "Beyond Rivalry": a CERN for AI and global data pool
[00:28:00] Davos reflections: the rest of the world is more rational than America
[00:34:00] Chinese vs. American culture: reverence for teachers, respect for elders
[00:42:00] Alvin's "Abundant" framework: valuing human dignity over production after AGI
[00:44:22] The great debate: will AGI find benevolence naturally (Alvin) or must it be designed in (Rony)?
[00:47:00] Rony on risk: AGI systems are unverifiable, untestable, and we cannot take the chance

Listen to the full episode and subscribe to the AI XR Podcast for weekly conversations at the intersection of AI, XR, and the future of humanity.

This episode is brought to you by Zappar, creators of Mattercraft—the leading visual development environment for building immersive 3D web experiences for mobile headsets and desktop. Build smarter at mattercraft.io.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    The Sustainable Ministry Show
    The AI-Powered Church: How to Steward the Tool of Our Time

    The Sustainable Ministry Show

    Play Episode Listen Later Feb 10, 2026 46:27


    Guest: Anthony Hunt, Next Gen Pastor & Author of The AI-Powered Church

From performing as a professional mascot for over 20 years to leading in Next Gen ministry, Anthony Hunt brings a unique energy to the conversation about the future of the church. In this episode, Anthony breaks down the walls of fear surrounding Artificial Intelligence, arguing that when used correctly, AI isn't just a shortcut; it's a partner that can help us reclaim our time for what matters most: discipleship and people. If you are a ministry leader feeling burnt out, a tech skeptic worried about the "soulless" nature of digital tools, or a creative looking to amplify your impact, this episode offers a theological and practical framework for moving forward.

    The Engineering Leadership Podcast
    Why founders should invest in coaching, communication & leadership mechanisms before you scale w/ James Birchler #248

    The Engineering Leadership Podcast

    Play Episode Listen Later Feb 10, 2026 50:46


    Founders often delay leadership coaching until a major crisis hits, leading to significant costs in productivity, team churn, and poor decisions. In this episode, James Birchler (Technical Advisor & Executive Leadership Coach) argues that early coaching is a game-changer for a startup's success. We explore the hidden costs of waiting and the benefits of intentionally installing leadership and communication systems before you scale. James shares specific self-awareness mechanisms, like advisory groups and feedback loops, to help founders design their day and create accountability. You'll also learn practical strategies like the "5-Minute Alignment Loop" for spotting communication breakdowns & for reinforcing clarity. Plus insights on how to "install your leadership OS" so it can scale with your company.

ABOUT JAMES BIRCHLER
James Birchler is an executive leadership coach and technical advisor who specializes in helping engineering leaders and founders develop greater self-awareness and build high-performing teams. He combines deep technical expertise with practical leadership development, making him particularly valuable for technical leaders scaling their organizations.

As both a founder and engineering leader, James has more than 20 years of experience leading teams at companies ranging from early-stage startups to Amazon, where his current role is Technical Advisor to the VP of Amazon Delivery Routing and Planning. Most recently, he founded NICER, a premium natural personal care company, and Actuate Partners, his executive coaching and technical advisory practice. He also held VP of Engineering roles at companies including Caffeine (backed by Greylock and Andreessen Horowitz), SmugMug (where his team acquired Flickr), and IMVU.

At IMVU, literally the first company to apply these principles, James implemented the Lean Startup methodologies alongside Eric Ries, author of The Lean Startup and creator of the methodology. His team helped pioneer the DevOps movement by building infrastructure to ship code to production 50 times per day and coining the term "continuous deployment." This experience in systematic experimentation and continuous improvement now informs his coaching approach through frameworks like CAMS (Coaching, Advising, Mentoring, Supporting) and the Think-Do-Learn Loop.

James completed his executive coaching certification at UC Berkeley Haas School of Business Executive Coaching Institute. His coaching practice focuses on self-awareness, integrity, accountability, and fostering growth mindsets that support continuous learning and high performance. He writes the Continuous Growth newsletter and offers both individual executive coaching and peer learning circles for technical leaders.

Through his advisory work with growth-stage startups in the US and Europe, James helps leaders navigate common scaling challenges including hiring and interviewing, implementing development methodologies, establishing operational cadences, and developing other leaders. His approach treats leadership development like product development—with systematic feedback loops, measurable outcomes, and continuous improvement.

You can find James at jamesbirchler.com, LinkedIn, and Substack.

This episode is brought to you by Retool!
What happens when your team can't keep up with internal tool requests? Teams start building their own, Shadow IT spreads across the org, and six months later you're untangling the mess… Retool gives teams a better way: governed, secure, and no cleanup required. Retool is the leading enterprise AppGen platform, powering how the world's most innovative companies build the tools that run their business. Over 10,000 organizations including Amazon, Stripe, Adobe, Brex, and Orangetheory Fitness use the platform to safely harness AI and their enterprise data to create governed, production-ready apps. Learn more at Retool.com/elc

SHOW NOTES:
Why founders should seek coaching earlier rather than waiting for a crisis to occur (2:45)
The high stakes of ignoring this critical advice & how this leads to communication & scaling problems (4:50)
The importance of effective communication channels & leadership mechanisms before pressure increases (6:12)
How investing a small amount in coaching early on can prevent hundreds of thousands of dollars in future costs (8:07)
Frameworks for cultivating self-awareness / leadership blind spots (11:06)
James's practice of "designing your day" around a desired identity, not just a list of tasks (12:30)
Why designing your day is about intentionality (15:13)
How this practice leads to better relationships & opportunities to reflect (17:44)
Reflective listening & its impact on customer relationships (19:32)
Strategies for improving self-awareness / uncovering blind spots (22:05)
An example of how awareness can lead to better results (26:03)
Day-to-day rituals for improving self-awareness (28:14)
Signals that your communication methods are effective & getting through (30:37)
Reflect on & define the desired outcome you want to generate (33:26)
The five-minute alignment loop for creating clarity & confirming ownership as a leader (35:21)
Why creating clarity & finding alignment is key as a founder (37:02)
How the same communication & leadership patterns recur as your org scales, from small startup to large enterprise (39:46)
The increasing importance of human skills like emotional intelligence and reflective listening in an age of AI (42:03)
Rapid fire questions (44:38)

This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor: https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer, Dan's also an avid 3D printer: https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter, check out her other work at https://elliecoggins.com/about/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Dental Sound Bites
    New Radiography Recommendations

    Dental Sound Bites

    Play Episode Listen Later Feb 10, 2026 37:11


    A clinical conversation about the updated recommendations to enhance radiography safety in dentistry. Special Guest: Dr. Erika Benavides. For more information, show notes and transcripts visit https://www.ada.org/podcast

Show Notes
In this episode, we are having a clinical conversation about the updated recommendations to enhance radiography safety in dentistry. We explore the major changes from previous guidelines, the rationale behind discontinuing patient shielding, the importance of patient‑centered imaging, and practical implications for dentists and academics.

Our guest is Dr. Erika Benavides, a Clinical Professor and Associate Chair of the Division of Oral Medicine, Oral Pathology and Radiology, and the Director of the CBCT Service at the University of Michigan School of Dentistry. She is a Diplomate and Past President of the American Board of Oral and Maxillofacial Radiology (ABOMR). She also served as Councilor for Communications of the American Academy of Oral and Maxillofacial Radiology and Chair of the Research and Technology Committee. Dr. Benavides is a Fellow of the American College of Dentists and has published multiple peer-reviewed manuscripts on the multidisciplinary aspects of diagnostic imaging. She has been a co-investigator in NIH-funded grants for the past 10 years and recently served as the Chair of the expert panel to update the 2012 ADA/FDA recommendations for dental radiography. Her clinical practice is dedicated to interpretation of 2D and 3D dentomaxillofacial imaging.

The two-part recommendations were updated by an expert panel which included radiologists, general and pediatric dentists, a public health specialist, and consultants from nearly every dental specialty. Among the main takeaways and new updates Dr. Benavides shares is that lead aprons and radiation collars are no longer recommended; this recommendation covers all dentomaxillofacial imaging procedures and applies to most patients. The panel also recommends avoiding routine or convenience imaging and focusing instead on patient-centered imaging based on each patient's specific needs, and, when possible, obtaining previous radiographs.

Dr. Benavides shares that imaging must be patient‑specific, not protocol-driven, and encourages dentists to ask the following questions before dental imaging: "Do we need this additional information? Is this additional information going to change my diagnosis, or it's going to contribute to the diagnosis and treatment planning?" The group discusses some of the possible challenges, and opportunities, in implementing these new recommendations.

Resources:
This episode is brought to you by Dr. Jen Oral Care. Learn more about Dr. Jen.
Read the full clinical recommendations: American Dental Association and American Academy of Oral and Maxillofacial Radiology patient selection for dental radiography and cone-beam computed tomography.
Find more ADA resources on X-Rays and Radiographs.
Stay connected with the ADA on social media! Follow us on Facebook, Instagram, LinkedIn, and TikTok for the latest industry news, member perks and conversations shaping dentistry.

    Adafruit Industries
    Valentines Fidget

    Adafruit Industries

    Play Episode Listen Later Feb 10, 2026 1:00


    Every week we'll 3D print designs from the community and showcase slicer settings, use cases and of course, Time-lapses! This Week: Valentines Fidget By StitchKing_4339675 https://www.printables.com/model/1584195-valentines-3-in-1-multi-fidget Bambu X1C PolyMaker PLA 0hr 58mins X:67 Y:60 Z:12mm .2mm layer / .4mm Nozzle 10% Infill / 1mm Retraction 200C / 60C 20g 230mm/s ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Adafruit on Instagram: https://www.instagram.com/adafruit Shop for parts to build your own DIY projects http://adafru.it/3dprinting 3D Printing Projects Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOWD2dJNRIN46uhMCWvNOlbG 3D Hangout Show Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOVgpmWevin2slopw_A3-A8Y Layer by Layer CAD Tutorials Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOVsMp6nKnpjsXSQ45nxfORb Timelapse Tuesday Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOVagy3CktXsAAs4b153xpp_ Connect with Noe and Pedro on Social Media: Noe's Twitter / Instagram: @ecken Pedro's Twitter / Instagram: @videopixil ----------------------------------------- Visit the Adafruit shop online - http://www.adafruit.com/?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting Subscribe to Adafruit on YouTube: http://adafru.it/subscribe Adafruit Monthly Deals & FREE Specials https://www.adafruit.com/free?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting Join our weekly Show & Tell on G+ Hangouts On Air: http://adafru.it/showtell Watch our latest project videos: http://adafru.it/latest?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting 3DThursday Posts: https://blog.adafruit.com/category/3d-printing?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting New tutorials on the Adafruit Learning System: http://learn.adafruit.com/?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting Music by Dan Q https://soundcloud.com/adafruit -----------------------------------------
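    The one-line settings string above packs a lot into a little space. As a reading aid, here is one way to unpack it into named fields; this is a sketch only, and the field names are my own interpretation of the Timelapse Tuesday shorthand (e.g. reading "X:67 Y:60 Z:12mm" as the model's footprint and height), not an exported Bambu Studio profile.

```python
# My reading of the one-line slicer settings above.
# Field names are assumptions, not an official slicer profile format.
print_settings = {
    "printer": "Bambu Lab X1C",
    "filament": "PolyMaker PLA",
    "print_time_min": 58,             # "0hr 58mins"
    "model_size_mm": (67, 60, 12),    # "X:67 Y:60 Z:12mm"
    "layer_height_mm": 0.2,           # ".2mm layer"
    "nozzle_diameter_mm": 0.4,        # ".4mm Nozzle"
    "infill_percent": 10,             # "10% Infill"
    "retraction_mm": 1.0,             # "1mm Retraction"
    "nozzle_temp_c": 200,             # "200C"
    "bed_temp_c": 60,                 # "60C"
    "filament_used_g": 20,            # "20g"
    "print_speed_mm_s": 230,          # "230mm/s"
}

if __name__ == "__main__":
    # Print the settings as labeled lines for quick reference.
    for name, value in print_settings.items():
        print(f"{name}: {value}")
```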

    Vision ProFiles
    Retrocade arcade for AVP

    Vision ProFiles

    Play Episode Listen Later Feb 10, 2026 54:30


    Eric, Dave, and Marty talk about Retrocade and other great stories.

BETA
visionOS 26.3 RC
https://developer.apple.com/documentation/visionos-release-notes/visionos-26_3-release-notes

NEWS

Patents
Apple Patent Reveals Split‑Cadence Eye Tracking for Low‑Power AR Glasses
https://x.com/PatentlyApple/status/2019735935169315055
Apple Reinvents Eye Tracking With Laser‑Based SMI Technology
https://x.com/PatentlyApple/status/2018780751077196106

Retrocade
Resolution Games launches Apple Vision Pro game Retrocade on Apple Arcade | interview
https://gamesbeat.com/resolution-games-launches-apple-vision-pro-game-retrocade-on-apple-arcade-interview/
How to turn an Apple Vision Pro into a retro videogame arcade
https://www.ped30.com/2026/02/07/apple-vision-pro-retrocade/
Retrocade turns the Vision Pro into a convincing virtual arcade
https://www.multicore.blog/p/retrocade-turns-the-vision-pro-into
Apple's Vision Pro Could Not Provide Me the (Fake) Arcade of My Dreams
https://gizmodo.com/apples-vision-pro-could-not-provide-me-the-fake-arcade-of-my-dreams-2000717554
Retrocade Transforms Apple Vision Pro into a Living 80s Arcade
https://www.techeblog.com/retrocade-apple-vision-pro-app-80s-arcade/

Lakers Game
Game #2 experience?
https://lakersnation.com/apple-immersive-lakers-courtside-spectrum-front-row-vision-pro/

Application - Archeology
The Apple Vision Pro: Useful Mixed/Augmented Reality (MR/AR) Headset for Archaeology or Not Quite There Yet?
https://www.cambridge.org/core/journals/advances-in-archaeological-practice/article/apple-vision-pro-useful-mixedaugmented-reality-mrar-headset-for-archaeology-or-not-quite-there-yet/250167C7C28A7CC4D6A1CBA1FFA30F50

PSVR2 Controllers
Is Apple no longer selling the PlayStation VR2 Sense controller for the Vision Pro?
https://appleworld.today/2026/02/is-apple-no-longer-selling-the-playstation-vr2-sense-controller-for-the-vision-pro/

Glasses Again
Next-gen Vision Pro: Is the future of Apple's visionOS tech actually smart glasses?
https://www.stuff.tv/features/next-gen-apple-vision-glasses/

Etsy Eyecovers
For $8, This Changes Everything
https://www.reddit.com/r/AppleVisionPro/comments/1qyu1w9/for_8_this_changes_everything/
Link to Etsy produced covers
https://www.etsy.com/listing/4439299259/anti-sleep-cap-for-apple-vision-pro-3d

Geography app
I built a 3D geography app for Vision Pro – almost nobody uses it. What am I doing wrong?
https://www.reddit.com/r/AppleVisionPro/comments/1qzdq4w/i_built_a_3d_geography_app_for_vision_pro_almost/
Yacko Sings the Countries (for Eric)
https://www.youtube.com/watch?v=V1508wboZXk

Waddle news
What are your thoughts about this, am I thinking wrong?
https://www.reddit.com/r/VisionPro/comments/1qzeazr/what_are_your_thoughts_about_this_am_i_thinking

Smash Bros
I didn't expect this to work...
https://www.youtube.com/watch?v=50K_Z7lTGzw

Smart Glasses
I wore the world's first HDR10 smart glasses, and they can easily replace my living room TV
https://www.zdnet.com/article/rayneo-air-4-pro-hdr10-smart-glasses-ces/

APPS
Retrocade
https://apps.apple.com/us/app/retrocade/id6746784702
Hand Physics Lab ($10)
https://apps.apple.com/us/app/hand-physics-lab/id6752609486

Website: ThePodTalk.Net
Email: ThePodTalkNetwork@gmail.com

    College Sports Now
    Dugouts, Dumbbells and Dingers - The North Carolina Baseball TAKEOVER | February 10, 2026

    College Sports Now

    Play Episode Listen Later Feb 10, 2026 52:30


    Happy Opening Day 2026! As they've done the last two seasons, the #3D crew has hit the road to feature a Top 25 team as they prepare for the start of the season! This year, it's the #11 North Carolina Tar Heels in the spotlight, as host David Kahn sits down for 3 exclusive interviews with head coach Scott Forbes and team captains Gavin Gallaher and Matthew Matthijs. The 3 UNC representatives get into what they're bringing from last year's Super Regional loss to Arizona into 2026, the philosophies of this team, leadership qualities they've embraced and more. Coach Forbes also recounts meeting Nick Saban at College Gameday and discusses how this new group came together, while Matthijs gives an in-depth breakdown of his recovery from a UCL injury that cut his 2025 season short, and Gallaher describes his evolution as a player and role model over the last 2 seasons, as well as the growth of his fabulous mustache.

Dugouts, Dumbbells and Dingers is sponsored by Homefield Apparel. They provide quality, thoughtful apparel for more than 190 colleges and universities across the country. Be sure to visit homefieldapparel.com for the best college baseball team gear you can find, including the North Carolina Tar Heels!

3D is also in partnership with Backyard Baseball Bros, the creators of the Borgoball. Check out backyardbaseballbros.com for the various editions of the Borgoball on sale now!

We're also glad to be working with Baseball BBQ. Use the code "3D-20" at checkout for 20% off your order at baseballbbq.com and get yourself the best college-branded grilling tools and apparel as the warm weather approaches and baseball season rolls on!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Insight Myanmar
    Nothing To Lose But Exploitation

    Insight Myanmar

    Play Episode Listen Later Feb 10, 2026 77:17


    Episode #483: “I particularly look from Marxist feminist perspectives,” says Ma Cheria, a Myanmar-born researcher now living in exile in Chiang Mai. Her work examines how capitalism and patriarchy combine to exploit Burmese migrant women in Thailand's informal economy. Before the 2021 military coup, she was a social worker involved in peace and gender programs and helped lead anti-coup strikes. After comrades were arrested, she fled to Thailand, continuing the struggle through research and activism. Cheria's studies reveal that over five million Myanmar migrants now live in Thailand, nearly two million without documents. Many work in “3D jobs”—dirty, dangerous, and demeaning—that Thai citizens refuse to do. Though formal factories must pay the minimum wage, most women end up in unregistered home-based factories where they can bring children and work flexible hours, but earn half the legal rate and lack safety or legal protection. “Workers know it is very unfair, but they cannot complain because they are undocumented,” she explains. Cheria traces these abuses to a malfunctioning migration system that forces workers to depend on brokers who extort money or seize passports. She links today's exile economy to Myanmar's crushed labor movement: once progressive and female-led, it was outlawed after the coup. In Thailand, migrants are legally allowed to join Thai-run unions but not to form their own—an empty right in border towns with no Thai workers. Her Marxist-feminist analysis highlights women's “double exploitation”: wage labor in factories and unpaid domestic labor at home. “In the revolution, we have to abolish both systems together,” she says of capitalism and patriarchy. From exile she teaches feminist and labor theory to ethnic women's groups online, believing that change grows through shared reflection. Despite repression and growing anti-migrant hostility, she documents quiet resilience in Burmese-run schools and clinics. Her message is clear: solidarity across borders is essential because “only a small group benefits, while the majority—the working class—remains unseen.”

    Girls That Manifest
    How I Manifested Passive Sales of a "Dead" Product

    Girls That Manifest

    Play Episode Listen Later Feb 10, 2026 23:29


    Wanna manifest more business sales without more marketing? I ran an experiment on selling a "dead" product without lifting a finger. Here's how...

In this episode you'll learn:
How I manifested sales on a neglected product without sending a single email or link
Moving from asking for sales to telling your reality what is happening
Why detachment and "ignoring" your business is often the fastest bridge to a result
Denying your senses and staying disciplined when the 3D inbox looks empty
The exact deconstruction of the 24-hour protocol I use to collapse time

RESOURCES:
Manifest Sales in 24 Hours (pre-sale)
Saturate That Sh*t 28-Day Challenge
FREE COURSE: Manifest $1k in 30 Days
FREE LIBRARY: The Abundance Vault

    The Film Board by The Next Reel Film Podcasts
    Mercy: The Algorithm Wants Your Lunch Money

    The Film Board by The Next Reel Film Podcasts

    Play Episode Listen Later Feb 10, 2026 61:52


    This month on The Film Board, Pete Wright drags Andy Nelson, Tommy Metz III, and Steve Sarmento into an emergency bonus hearing because Andy texted, essentially, "We can't skip a month. Also I found a movie." That movie is Mercy, a slick, noisy, deeply committed screenlife thriller where Chris Pratt wakes up strapped into a futuristic execution-chair-courtroom and has 90 minutes to prove he didn't kill his wife. The judge is an AI who looks like Rebecca Ferguson. Which is frankly unfair to every other AI.

From there, it's a full-spoilers sprint through a world where justice is software, surveillance is just "normal life," and every single camera on Earth is apparently pointed at exactly the wrong moment. The panel fights over what Mercy thinks it's doing (a cautionary tale about AI and institutions) versus what it actually does (a pulpy, coincidence-powered ride that occasionally forgets its own premise and wanders off toward terrorism and explosions).

Andy is… not having it. Steve is torn in the way only a lover of scrappy sci-fi concepts can be: "It's messy, but I'm intrigued." Tommy—who walked in expecting bargain-bin January nonsense—ends up delighted, especially after an accidental 3D screening turns the whole thing into a theme-park attraction where the chair is the main character. Pete tries to keep the court metaphor alive long enough to pronounce a verdict, but keeps getting distracted by the movie's most dangerous idea: not the AI, but the assumption that the only way to get "justice" is if the system can see literally everything.

Also: yes, we talk about the wind. The screens have wind.

Watch & Discover
Watch Now: Apple TV | Amazon | Letterboxd
Original Theatrical Trailer

Support The Next Reel Family of Film Shows:
Become a member for just $5/month or $55/year
Join our Discord community of movie lovers

The Next Reel Family of Film Shows:
Cinema Scope: Bridging Genres, Subgenres, and Movements
The Film Board
Movies We Like
The Next Reel
Sitting in the Dark

Connect With Us:
Main Site: Web
Movie Platforms: Letterboxd | Flickchart
Social Media: Facebook | Instagram | Threads | Bluesky | YouTube | Pinterest
Your Hosts: Pete | JJ | Steve | Tommy | Andy | Ocean

Shop & Stream:
Merch Store: Apparel, stickers, mugs & more
Watch Page: Buy/rent films we've discussed
Originals: Source material from our episodes
Special offers: Audible

    Ready 4 Pushback
    Ep. 321 Bridging the College to Cockpit Interview Gap with Top Rudder Consulting

    Ready 4 Pushback

    Play Episode Listen Later Feb 9, 2026 28:29


    Nik welcomes back Travis Koch, founder of Top Rudder Consulting, to discuss the critical role of interview preparation for college students pursuing aviation careers. Travis shares insights on the unique experience challenges students face as they transition from college to the job market and explains how Top Rudder aims to fill the experience gap. Furthermore, Nik and Travis explore why interview skills, effective communication, and authentic self-presentation are just as important as hands-on technical experience. Hear actionable advice for young professionals and discover how Top Rudder helps students elevate their career prospects. Plus, find out how Top Rudder Consulting can help you on your interview journey by contacting toprudderinfo@gmail.com.

CONNECT WITH US
Are you ready to take your preparation to the next level? Don't wait until it's too late. Use the promo code "R4P2026" and save 10% on all our services. Check us out at www.spitfireelite.com! If you want to recommend someone to guest on the show, email Nik at podcast@spitfireelite.com, and if you need a professional pilot resume, go to www.spitfireelite.com/podcast/ for FREE templates!

SPONSOR
Are you a pilot just coming out of the military and looking for the perfect second home for your family? Look no further! Reach out to Marty and his team by visiting www.tridenthomeloans.com to get the best VA loans available anywhere in the US. Be ready for takeoff anytime with 3D-stretch, stain-repellent, and wrinkle-free aviation uniforms by Flight Uniforms. Just go to www.flightuniform.com and type the code SPITFIREPOD20 to get a special 20% discount on your first order.

#Aviation #AviationCareers #aviationcrew #AviationJobs #AviationLeadership #AviationEducation #AviationOpportunities #AviationPodcast #AirlinePilot #AirlineJobs #AirlineInterviewPrep #flying #flyingtips #PilotDevelopment #PilotFinance #pilotcareer #pilottips #pilotcareertips #PilotExperience #pilotcaptain #PilotTraining #PilotSuccess #pilotpodcast #PilotPreparation #Pilotrecruitment #flightschool #aviationschool #pilotcareer #pilotlife #pilot

    Inform Performance
    Accelerate - Emma Meehan: Technical Founder in a Clinical World

    Inform Performance

    Play Episode Listen Later Feb 9, 2026 46:54


    Episode 210: In this episode of Accelerate, host Nicola Graham is joined by Emma Meehan — Founder, CEO, and CTO of KinetikIQ. Emma is building technology that sits at the intersection of biomechanics, machine learning, and real-world performance. KinetikIQ turns any smartphone into a full-body 3D biomechanics system using LiDAR and AI — no wearables required — making advanced movement analysis far more accessible across sport and health. With a background in computer science and software engineering, alongside experience as a competitive weightlifter, Emma brings both technical depth and practitioner perspective to product development. Her work has already been recognised across sport, technology, and business — including wins at the KPMG Global Tech Innovator Ireland and the Barca Innovation Challenge, Best New Sports Business of the Year at the Irish Sport Awards, recognition from SportsTechX as a European startup to watch, and features in the Sunday Business Post and Irish Independent 30 Under 30 lists. Together, Nicola and Emma explore what it really takes to build a company as a technical founder, how the Irish startup ecosystem can support early-stage growth, and the realities of securing venture capital in sport and healthtech — alongside the lived experience of building as a female founder in a still-emerging industry.

Topics discussed:
Building a company as a technical founder
The role of the Irish startup ecosystem in early growth
Venture capital funding in sport and healthtech
The realities of being a female founder in sports technology

Where you can find Emma:
LinkedIn
Instagram
KinetikIQ

Sponsors
Gameplan is a rehab Project Management & Data Analytics Platform that improves operational & communication efficiency during rehab. Gameplan provides a centralised tool for MDTs to work collaboratively inside a data-rich environment.
VALD Performance, makers of the ForceDecks, ForceFrame, HumanTrak, Dynamo, SmartSpeed, and NordBoard. VALD Performance systems are built with the high-performance practitioner in mind, translating traditionally lab-based technologies into engaging, quick, easy-to-use tools for daily testing, monitoring and training.
Hytro: The world's leading Blood Flow Restriction (BFR) wearable, designed to accelerate recovery and maximise athletic potential using Hytro BFR for Professional Sport.

Where to Find Us
Keep up to date with everything that is going on with the podcast by following Inform Performance on:
Instagram
Twitter
Our Website

Our Team
Andy McDonald
Ben Ashworth
Steve Barrett
Pete McKnight

    Applelianos
    AirPods Ultra, ¿Con sensores y cámaras?

    Applelianos

    Play Episode Listen Later Feb 9, 2026 59:42


    Apple wants your next AirPods Pro not only to hear you, but also to "see" what is around you. We discuss the rumors about AirPods with infrared cameras capable of understanding the space around you in 3D to improve Spatial Audio with Vision Pro, control music with in-air gestures, and become the new "eyes" of Apple Intelligence. We cover what is known about this possible "AirPods Ultra"-type model, what role the H3 chip would play, what happens to the current AirPods Pro 3, and why all of this opens the door to a new generation of wearables centered on spatial computing, with a launch pointing to 2026 if the leaks are right. #AirPodsPro #VisionPro #AppleRumores #AudioEspacial #AppleIntelligence #Wearables #ChipH3 #RealidadEspacial #Apple2026 #GestosAereos

https://seoxan.es/crear_pedido_hosting
Coupon code "APPLE"
SPONSORED BY SEOXAN: professional SEO optimization for your business
https://seoxan.es
https://uptime.urtix.es

JOIN US LIVE: Leave your opinion in the comments, ask questions, and be part of the most important conversation about the future of the iPad and the Apple ecosystem. Your voice counts!

DID YOU LIKE THE EPISODE? ✨ Give it a LIKE, SUBSCRIBE and turn on notifications so you don't miss anything, COMMENT, and SHARE it with your Appleliano friends.

FOLLOW US ON ALL OUR PLATFORMS:
YouTube: https://www.youtube.com/@Applelianos
Telegram: https://t.me/+Jm8IE4n3xtI2Zjdk
X (Twitter): https://x.com/ApplelianosPod
Facebook: https://www.facebook.com/applelianos
Apple Podcasts: https://apple.co/39QoPbO

    Clean Truth
    Business & Bullsh*t: Adapt or Die! (EP #73)

    Clean Truth

    Play Episode Listen Later Feb 9, 2026 31:15


    The Founderz Lounge Episode #73 with Don Varady and Steve Bon.

Don and Steve are back with another round of Business & Bullsh*t, where real entrepreneurs break down what's happening in business, what's changing in culture, and what owners need to stop whining about and start fixing. This one starts light, then gets real fast.

They rip through the top menu trends shaping 2025, from boba and Dubai chocolate to Korean flavors, matcha, protein overload, pickles, and the rise of mocktails as younger consumers drink less.

Then Steve pulls a weird one from CES. A lollipop that lets you "listen" to music through bone conduction while you eat it. It turns into a bigger convo about why brands have to create experiences people cannot get from a screen.

From there, the episode bounces through Black Rock Coffee's aggressive growth goals and what "Barista First culture" might be doing right, then shifts into theaters reinventing themselves with premium experiences, funnels, and the push toward IMAX and 3D.

But the real core of this episode is the Hot Take: “It is kind of the norm now for business owners to complain about staffing issues… When are we gonna stop using it as an excuse? And I feel like we either need to adapt or die.”

They unpack why “just pay people more” is not the full answer, how expectations have shifted, why career paths matter more than ever, and what it takes to get people truly engaged. Don shares a direct strategy that has worked for him, and why ownership, not just pay, is what keeps the right people around. They wrap with a Fast Five full of one-liners, old school nostalgia, and a reminder that excuses are expensive.

Tune in to hear more...

Timestamps:
[00:00] Trailer
[01:26] Founderz RoundUp
[05:39] Music-Playing Lollipop Launch
[07:44] Random Bullshit
[10:27] Movie Theater Industry Comeback
[15:36] Founderz Hot Takes
[19:37] Building Career Paths for Employees
[24:19] Founderz Fast Five
[28:21] Bringing Back Old Technology

Key Takeaways:
• "It is kind of the norm now for business owners to complain about staffing issues… and I feel like we either need to adapt or die." ~Don Varady
• "The money coming in has to be greater than the money going out. Or why do you own a business?" ~Don Varady
• "Where do you want to be here in a few years? What is it that you want? I may not be able to give it to you right now, but let's build a road to get there." ~Don Varady
• Younger employees expect clearer career paths and faster progress, not just a paycheck. ~Steve Bon
• Employees want to feel like they are more than a puzzle piece and that their future inside the company actually matters. ~Steve Bon
• "If someone feels underserved, underappreciated, and underpaid, no matter what you do, they are not going to be engaged." ~Steve Bon
• Giving people a sense of ownership beyond salary is what actually keeps them invested. ~Don Varady
• The brands and businesses that survive are the ones willing to rethink how they motivate people instead of blaming the workforce. ~Steve Bon

Connect with Don and Steve…
Don Varady:
Facebook: https://www.facebook.com/don.varady/
Instagram: https://www.instagram.com/donvarady/
LinkedIn: https://www.linkedin.com/in/don-varady-450896145
Steve Bon:
LinkedIn: https://www.linkedin.com/in/stephenbon
Instagram: https://instagram.com/stevebon8

Tune in to every episode on your favorite platform:
Website: https://www.thefounderzlounge.com/
YouTube: https://www.youtube.com/@TheFounderzLounge
Spotify: https://open.spotify.com/show/0Nurr4XjBE747qJ9Zjth0G
Apple Music: https://podcasts.apple.com/us/podcast/the-founderz-lounge/id1461825349

The Founderz Lounge is Powered By:
Clean Eatz: https://cleaneatz.com/
Bon's Eye Marketing: https://bonseyeonline.com/

    Shed Geek Podcast
    STEEL KINGS: 3D DESIGNER DEMO

    Shed Geek Podcast

    Play Episode Listen Later Feb 9, 2026 47:51 Transcription Available


Send a text

Most post-frame buyers start on their phones. If your sales flow doesn't, you're already behind. We brought Dan and Norma from Idea Room into the studio to show how a mobile 3D configurator pairs with SmartBuild to capture better leads, price faster, and close more barn, shop, and barndominium projects without adding headcount.

We start with the buyer journey: predefined styles that snap a basic shell into a premium farm build or wraparound design in one tap. Clean visuals, fast loads, and simple controls keep users engaged long enough to hit submit. That single click does the heavy lifting—contact info, site ZIP for tax and delivery, notes—and then spins up a SmartBuild job automatically. Your team opens it to find precise materials, cut lists, and assembly drawings aligned to supplier catalogs, trimming errors and waste while turning "rough price?" into a real number in minutes.

The integration now runs both ways. Sales View pulls the SmartBuild job total back into Idea Room, and with a manual refresh you can sync new edits after changing doors, leans, or structure. Reps manage leads on mobile with click-to-call and status updates, while one SmartBuild specialist dials in the details for accuracy and margin. Setup is fast—mapping colors, doors, windows, and trusses takes minutes when supplier data matches—and branding the configurator on your site builds your pipeline under your logo.

We also preview features for complex use cases: interior and divider walls, mezzanines with stairs and railings, and barndominium layouts that feel less like a gamble and more like a guided path to a buildable plan. If you're running SmartBuild today, this is the turbo your process needs; if you're not, you'll see why dealers and builders are adopting the stack for speed, precision, and a better buyer experience.

Ready to see it live? Meet us at the NFBA Trade Expo in Oklahoma City, Feb 25–27, for hands-on demos. If this helped, subscribe, share with a builder who needs a faster quote flow, and leave a review to tell us what feature you want next.

For more information or to know more about the Shed Geek Podcast visit us at our website. Would you like to receive our weekly newsletter? Sign up on our website. Follow us on Twitter, Instagram, Facebook, or YouTube at the handle @shedgeekpodcast. To be a guest on the Shed Geek Podcast visit our website and fill out the "Contact Us" form. To suggest show topics or ask questions you want answered, email us at info@shedgeek.com.

This episode's Sponsors:
Studio Sponsor: J Money LLC
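For readers curious how a configurator-to-estimating handoff like the one described in this episode summary might look in code, here is a minimal, hypothetical sketch. Every name in it (Lead, EstimatingJob, create_job, sync_price, and the placeholder pricing) is invented for illustration and does not represent Idea Room's or SmartBuild's actual APIs or pricing logic; the point is only the shape of the flow: submit once, generate a job automatically, and sync the price back to the sales side.

```python
# Hypothetical sketch of a configurator-to-estimating lead flow.
# None of these classes or functions are real Idea Room / SmartBuild APIs.
from dataclasses import dataclass, field


@dataclass
class Lead:
    """What a configurator's submit button might capture."""
    name: str
    phone: str
    site_zip: str            # used for tax and delivery estimates
    building_config: dict    # style, doors, lean-tos, colors chosen in the 3D tool
    notes: str = ""


@dataclass
class EstimatingJob:
    """A stand-in for the job an estimating system would create."""
    lead: Lead
    materials: list = field(default_factory=list)
    total_price: float = 0.0


def create_job(lead: Lead) -> EstimatingJob:
    """Turn a submitted lead into an estimating job (placeholder logic)."""
    job = EstimatingJob(lead=lead)
    # A real system would expand the configuration against supplier catalogs
    # to produce materials, cut lists, and drawings; this just echoes the config.
    job.materials = [f"{option}={choice}" for option, choice in lead.building_config.items()]
    job.total_price = 5000.0 + 250.0 * len(job.materials)  # invented pricing
    return job


def sync_price(job: EstimatingJob) -> float:
    """Pull the job total back to the sales side after edits (the 'manual refresh')."""
    return job.total_price


if __name__ == "__main__":
    lead = Lead(
        name="Sample Buyer",
        phone="555-0100",
        site_zip="73101",
        building_config={"style": "premium farm", "doors": 3, "wraparound": True},
    )
    job = create_job(lead)
    print("Quoted total:", sync_price(job))
```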

    Toys on Tap
    Ep. 262 Toys on Tap w/Elwa: Designing Toys That Do Something

    Toys on Tap

    Play Episode Listen Later Feb 9, 2026 52:21


    We sit down with ELWA to talk about choosing creative freedom over turning passion into pressure. While holding a 9-to-5, ELWA builds toys on the side by design, using that distance to protect experimentation and joy. We trace his path from drawing and video games to UX/UI, 3D modeling, and printing toys inspired by Astro Boy, music, and everyday objects. We dig into painting with intention, designing toys with real utility, and blending culture into cohesive characters. ELWA shares plans for his Lil Vamp line, limited-run art toys, coffin packaging, and a 2026 launch. This episode is about patience, process, and building a toy world on your own terms.

On Instagram: @elwa.psd

This Episode is Sponsored by: Empire Blisters – Your go-to source for blister packaging! With 19+ styles and bundle deals, they've got everything you need to make your toys shine. Use code TOYSONTAP10 at checkout for 10% off. Patreon members get 20% off, another reason to join!

Support the Show on Patreon: Unlock exclusive episodes, early access, and behind-the-scenes content: patreon.com/toysontap

Thanks to Our Supporters

Rate & Review the Show! Leave a rating and review wherever you listen, it's the best way to help Toys on Tap grow!

    Rise of the Podcast
    Are We Cooked, Chat? | Rise of the Podcast #341

    Rise of the Podcast

    Play Episode Listen Later Feb 9, 2026 94:17


    Bryce (AKA MrClyff) joins Jeremy and Kara to talk about being creators in Northern Minnesota (and making food)! We also reminisce about our favorite Star Wars experiences, ramble on with stories of the good ol' days, nerd out about current and upcoming Star Wars shows, books, and games, and talk a little bit about life. Thank you so much for supporting our channel! We love interacting with all of you! We look forward to talking with you guys every week about Star Wars, gaming, 3D printing, pop culture, movies, and everything else! If you want to show your love, consider sending us an email, joining our Discord, or following us on Twitch! We'll see you again soon! ------------------------------------------------------------------- Twitch: http://www.twitch.tv/riseofthepodcast Discord Server Link: https://discord.gg/DcuBKXVxJs Email us: contact@RiseOfThePodcast.com Facebook: https://www.facebook.com/riseofthepodcast Web: http://www.riseofthepodcast.com Twitter: http://www.twitter.com/rotptweets Instagram: https://www.instagram.com/riseofthepodcast Patreon: https://www.patreon.com/RiseofthePodcast Spotify: https://spoti.fi/3qzOazE iTunes: https://apple.co/3wAfwcI Google Podcasts: https://bit.ly/RotPGoogle Thanks for watching! Rise of the Podcast Episode 341: Are We Cooked, Chat? Produced and Edited by 8r0wn13 ©2026 All Rights Reserved #Podcast #DuluthMN #StarWars

    Rise of the Podcast
    Huge News from Ward TCG! | Rise of the Podcast #342

    Rise of the Podcast

    Play Episode Listen Later Feb 9, 2026 118:42


    Creators of WARD TCG Joey and Marty join us to share some big news for the game! Card designs, tournaments, and new formats?! Let's go! We also reminisce about our favorite Star Wars experiences, ramble on with stories of the good ol' days, nerd out about current and upcoming Star Wars shows, books, and games, and talk a little bit about life. Thank you so much for supporting our channel! We love interacting with all of you! We look forward to talking with you guys every week about Star Wars, gaming, 3D printing, pop culture, movies, and everything else! If you want to show your love, consider sending us an email, joining our Discord, or following us on Twitch! We'll see you again soon! ------------------------------------------------------------------- Twitch: http://www.twitch.tv/riseofthepodcast Discord Server Link: https://discord.gg/DcuBKXVxJs Email us: contact@RiseOfThePodcast.com Facebook: https://www.facebook.com/riseofthepodcast Web: http://www.riseofthepodcast.com Twitter: http://www.twitter.com/rotptweets Instagram: https://www.instagram.com/riseofthepodcast Patreon: https://www.patreon.com/RiseofthePodcast Spotify: https://spoti.fi/3qzOazE iTunes: https://apple.co/3wAfwcI Google Podcasts: https://bit.ly/RotPGoogle Thanks for watching! Rise of the Podcast Episode 342: Huge News from Ward TCG! Produced and Edited by 8r0wn13 ©2026 All Rights Reserved #Podcast #DuluthMN #StarWars

    Hacker News Recap
    February 8th, 2026 | Vouch

    Hacker News Recap

    Play Episode Listen Later Feb 9, 2026 15:30


    This is a recap of the top 10 posts on Hacker News on February 08, 2026. This podcast was generated by wondercraft.ai.

(00:30): Vouch
Original post: https://news.ycombinator.com/item?id=46930961&utm_source=wondercraft_ai

(01:58): AI fatigue is real and nobody talks about it
Original post: https://news.ycombinator.com/item?id=46934404&utm_source=wondercraft_ai

(03:27): DoNotNotify is now Open Source
Original post: https://news.ycombinator.com/item?id=46932192&utm_source=wondercraft_ai

(04:55): I am happier writing code by hand
Original post: https://news.ycombinator.com/item?id=46934344&utm_source=wondercraft_ai

(06:24): Slop Terrifies Me
Original post: https://news.ycombinator.com/item?id=46933067&utm_source=wondercraft_ai

(07:52): Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory
Original post: https://news.ycombinator.com/item?id=46930391&utm_source=wondercraft_ai

(09:21): I put a real-time 3D shader on the Game Boy Color
Original post: https://news.ycombinator.com/item?id=46935791&utm_source=wondercraft_ai

(10:49): OpenClaw is changing my life
Original post: https://news.ycombinator.com/item?id=46931805&utm_source=wondercraft_ai

(12:18): Omega-3 is inversely related to risk of early-onset dementia
Original post: https://news.ycombinator.com/item?id=46935991&utm_source=wondercraft_ai

(13:46): The world heard JD Vance being booed at the Olympics. Except for viewers in USA
Original post: https://news.ycombinator.com/item?id=46931948&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

    Celebrate Muliebrity with Michelle Lyons
    Prolapse, Avulsions & Pessaries: Episode 103 with Dr Melissa Davidson

    Celebrate Muliebrity with Michelle Lyons

    Play Episode Listen Later Feb 9, 2026 75:41


    Hello & Welcome to today's episode, where I'm chatting with Dr Melissa Davidson about prolapse, avulsions and pessaries!

My guest today is a specialist physiotherapist in pelvic health, and we discussed her career journey, including her specialization in pelvic health in New Zealand, where she is the only registered specialist. She highlighted her expertise in advanced clinical practice, research, and leadership within the profession. Melissa also shared her experience conducting research in bioengineering at Auckland University, which revealed that physiotherapists' assessments of muscle tone and stiffness lack objective measurement methods, challenging traditional practices in the field.

We discussed the importance of evidence-based practice in pelvic health physiotherapy, particularly regarding prolapse and pain management. Melissa shared her experience conducting a PhD in bioengineering, which challenged many accepted beliefs in physiotherapy and highlighted the need to think outside traditional silos. Melissa shared her experience developing and using a 3D model named Lily for patient education, discovering its effectiveness in helping patients understand complex medical concepts. She discussed her collaboration with bioengineers, noting that while the engineers were initially skeptical about her approach, they eventually recognized the value of her clinical perspective.

We explored the diagnosis of levator avulsion and emphasized that a definitive diagnosis should not be made before 6-12 months postpartum, and we highlighted the importance of careful wording when communicating with patients about this condition, as the diagnosis can be devastating and there is currently no surgical fix.

Melissa discussed her approach to pessary management and training for physiotherapists, emphasizing the importance of medical clearance and speculum exams for assessing mucosal integrity. She explained the assessment process for avulsion injuries, including the use of a training model and peer-to-peer learning. Melissa also highlighted the need for informed decision-making during pregnancy regarding delivery options, advocating for patient autonomy and open discussions about birth plans.

We definitely agreed on the importance of using validated research and scientific terminology, rather than relying on subjective assessments or unproven treatments.

This was definitely a deep and rich conversation that I really enjoyed...and I hope you will too!

Want to learn more about prolapse and other perinatal pelvic health issues, from a whole woman, evidence based, clinical reasoning from assessment to management perspective? And do it all online, with evergreen access to the course AND a private fb support group? Look no further! My new online course, Perinatal Pelvic Rehab, has you covered, from preconception through pregnancy to postpartum (including what we need to be aware of when someone is postnatal AND perimenopausal! - if you work with perinatal women aged 35+, you need to know this!). Visit CelebrateMuliebrity.com for all the course info!

Until next time, Onwards & Upwards, Mx

    Dental Assistant Nation
    Episode 422: What Every Dental Assistant Should Know About 3D Printing

    Dental Assistant Nation

    Play Episode Listen Later Feb 9, 2026 15:47


    What if the future of dentistry is already sitting inside your practice and your assistant is the one who unlocks it? Digital dentistry brings speed, confidence, and opportunity into the practice by allowing assistants to create appliances in house, reduce wait times, and deliver same-day solutions patients can trust. Scanning and 3D printing streamline workflows, improve patient communication, and give teams pride in providing high-quality care without relying solely on outside labs. Through education like the Digital Dental Assistant Academy (DDAA), assistants gain hands-on training, build real confidence with technology, and step into expanded roles that support the entire practice, creating smoother days, stronger outcomes, and a future-focused approach to dentistry.

Connect with Rochelle
Website: https://theddaa.com/
Email: info@theddaa.com
Facebook: https://www.facebook.com/DigitalDentalAssistantAcademy
Instagram: https://www.instagram.com/digitaldentalassistantacademy/?hl=en
Tiktok: https://www.tiktok.com/@digitaldaacademy

—-------------------------------

Meet me at the Chicago Midwinter Meeting, February 20–21, for two powerful sessions:
✨ Harnessing the Power of Personalities in the Dental Practice
February 20, 8:00 AM – 9:30 AM. Learn how understanding personalities can transform teamwork and communication in your dental practice.

    Your daily news from 3DPrint.com
    3DPOD 292: 3D Printer Product Reviews with Alastair Jennings

    Your daily news from 3DPrint.com

    Play Episode Listen Later Feb 9, 2026 46:01


    Alastair Jennings has been reviewing cameras for a very long time. A chance digression led him to review one of the first RepRap 3D printers. Since then, Alastair has reviewed many dozens of 3D printers over the years for various leading websites, national newspapers, and magazines. He has dozens of systems at home and uses them to make props, camera add-ons, and much more for his work at the local college, where he teaches. Alastair’s knowledge of desktop machines is deep and vast. At the same time, he is also honest and direct; he doesn’t mince his words. We talk to Alastair about the development of desktop 3D printers over the past decade, milestone 3D printers, good printers now, and much more. We talk about the current flock of systems in a very open way and hope that you appreciate Alastair’s deep insight and candor. This episode of the 3DPOD is brought to you by Continuum Powders, industry leaders in sustainable metal powder production. From aerospace to energy, Continuum delivers high-performance powders made from reclaimed materials without compromising quality.   

    Lean Built: Manufacturing Freedom
    Why Goodwill Beats Winning in Business | Lean Built - Manufacturing Freedom E133

    Lean Built: Manufacturing Freedom

    Play Episode Listen Later Feb 9, 2026 50:18


    The way you treat people in business often matters more than the deal itself. Andrew and Jay talk about what happens when something breaks, an emergency hits, or you need a favor...and why companies that build goodwill get help while others get ignored. Drawing on real shop experience, customer behavior, game theory, and a Godfather analogy, they challenge the idea that business is a zero-sum game and argue that collaboration, trust, and shared wins quietly determine who survives and who doesn't.Before that they catch up on what's happening in their shops, covering recent machine work, air and power challenges, and small automation ideas to reduce wasted effort. They talk through using AI for internal software, quoting, and understanding business data; they also talk through websites, first-mover advantage, practical 3D printing workflows, and more.

    Hikes and Mics Podcast
    S13 - Episode #06 - Melanie (Poppi) & Dave (La Bamba)

    Hikes and Mics Podcast

    Play Episode Listen Later Feb 9, 2026 61:42


    Health Hats, the Podcast
    If You Have a Body, You’re an Athlete: Training for MS

    Health Hats, the Podcast

    Play Episode Listen Later Feb 8, 2026 34:26


    Former Nike exec Mark Hochgesang interviews Danny on Heavy Hitter Sports Podcast about MS & being an adaptive athlete. Just back from Belize! Training works.

Summary
My friend Mark Hochgesang, former Nike exec and host of Heavy Hitter Sports, recently interviewed me. While I usually wear my life on my sleeve on Health Hats, this conversation revealed something different—how I think about myself as an adaptive athlete. Phil Knight’s mantra: “If you have a body, you’re an athlete.” I never thought of it that way until Mark helped me see it. Training to travel? That’s athletic training. Loading a 60-pound wheelchair into an SUV? Strength work. Walking 3,500 steps a day with MS? Competition with myself. Here’s what we covered: