In this feed drop from the Internet History Podcast, host Brian McCullough speaks with Chris Dixon, general partner at a16z, about his path from 1980s hobbyist programmer to one of the most prominent venture capitalists in tech. Chris traces his career from quantitative finance to founding SiteAdvisor, cofounding Founder Collective, starting an early machine learning company, and eventually building a16z's crypto practice from the ground up. They also discuss his framework for spotting unconventional investments, the current state of crypto regulation, and why New York is becoming a serious tech hub.

Resources:
Follow Chris Dixon on X: https://twitter.com/cdixon
Follow Brian McCullough on X: https://twitter.com/brianmcc
Listen to Internet History Podcast: https://www.youtube.com/@internethistorypodcast

Stay Updated:
Find a16z on YouTube, X, and LinkedIn
Listen to the a16z Show on Spotify and Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
00:00-20:00: Team USA men's hockey wins gold. ML looks back on an amazing run. Thanks to Byrne Dairy and CH Insurance. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
ML engineering demand remains high with a 3.2-to-1 job-to-candidate ratio, but entry-level hiring is collapsing as AI automates routine programming and data tasks. Career longevity requires shifting from model training to production operations, deep domain expertise, and mastering AI-augmented workflows before standard implementation becomes a commodity.

Links
Notes and resources at ocdevel.com/mlg/mla-30
Try a walking desk - stay healthy & sharp while you learn & code
Generate a podcast - use my voice to listen to any AI generated content you want

Market Data and Displacement
ML engineering demand rose 89% in early 2025. Median salary is $187,500, with senior roles reaching $550,000. There are 3.2 open jobs for every qualified candidate. AI-exposed roles for workers aged 22 to 25 declined 13 to 16%, while workers over 30 saw 6 to 12% growth. Professional service job openings dropped 20% year-over-year by January 2025. Microsoft cut 15,000 roles, targeting software engineers, and 30% of its code is now AI-generated. Salesforce reduced support headcount from 9,000 to 5,000 after AI handled 30 to 50% of its workload.

Sector Comparisons
Creative: Chinese illustrator jobs fell 70% in one year. AI increased output from 1 to 40 scenes per day, crashing commission rates by 90%.
Trades: US construction lacks 1.7 million workers. Licensing takes 5 years, and the career fatality risk is 1 in 200. High suicide rates (56 per 100,000) and emerging robotics like the $5,900 Unitree R1 indicate a 10 to 15 year window before automation.
Orchestration: Prompt engineering roles paying $375,000 became nearly obsolete in 24 months. Claude Code solves 72% of GitHub issues in under eight minutes.

Technical Specialization Priorities
Model Ops: Move from training to deployment using vLLM or TensorRT. Set up drift detection and monitoring via MLflow or Weights & Biases (see the drift-detection sketch after these notes).
Evaluation: Use DeepEval or RAGAS to test for hallucinations, PII leaks, and adversarial robustness.
Agentic Workflows: Build multi-step systems with LangGraph or CrewAI. Include human-in-the-loop checkpoints and observability.
Optimization: Focus on quantization and distillation for on-device, air-gapped deployment.
Domain Expertise: 57.7% of ML postings prefer specialists in healthcare, finance, or climate over generalists.

Industry Perspectives
Accelerationists (Amodei, Altman): Predict major disruption within 1 to 5 years.
Skeptics (LeCun, Marcus): Argue LLMs lack causal reasoning, extending the adoption timeline to 10 to 15 years.
Pragmatists (Andrew Ng): Argue that as code gets cheap, the bottleneck shifts from implementation to specification.
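The episode names MLflow and Weights & Biases for monitoring but doesn't show code. As a flavor of what the drift-detection piece involves, here is a minimal Population Stability Index (PSI) sketch in plain numpy. The 0.2 alert threshold is a common rule of thumb, and all numbers and names below are illustrative assumptions, not from the episode.

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training data and live traffic.

    Bin edges come from the training (expected) distribution; a small
    epsilon keeps empty bins from producing infinities.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip so out-of-range production values land in the edge bins.
    expected = np.clip(expected, edges[0], edges[-1])
    observed = np.clip(observed, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    o_frac = np.histogram(observed, edges)[0] / len(observed) + 1e-6
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live_feature = rng.normal(0.4, 1.2, 2_000)    # same feature in production

score = psi(train_feature, live_feature)
print(f"PSI = {score:.3f}")  # PSI > 0.2 is a common "investigate" threshold
if score > 0.2:
    print("Drift alert: investigate upstream data or retrain.")
```

In a real deployment you would log this score per feature on a schedule (for example, as an MLflow metric) and alert on it, rather than printing.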
AI is already displacing workers in targeted ways - entry-level knowledge workers are being quietly erased from hiring pipelines, freelancers are getting crushed, and the career ladder is being sawed off at the bottom rungs. Yet ML engineer demand has surged 89% with a 3.2:1 talent deficit and a $187K median salary. Covers the real displacement data, lessons from the artist bloodbath, the trades escape hatch, the orchestrator treadmill, expert disagreements on timelines, and concrete short- and long-term career moves for ML engineers.

Links
Notes and resources at ocdevel.com/mlg/mla-4
Try a walking desk - stay healthy & sharp while you learn & code
Generate a podcast - use my voice to listen to any AI generated content you want

Market Metrics and Displacement Dynamics
ML Market: H1 2025 demand rose 89% with a 3.2 to 1 talent deficit. Median salary is $187,500, while generative AI specialists earn a 40 to 60 percent premium.
The "Quiet" Decline: Macro data shows only 4.5% of total layoffs are AI-attributed, but entry-level hiring is collapsing. Stanford/ADP data shows a 13 to 16 percent employment drop for workers aged 22 to 25 in AI-exposed roles since late 2022. UK graduate job postings fell 67%.
Corporate Attrition: Salesforce cut 4,000 roles after AI absorbed 30 to 50 percent of workloads. Microsoft cut 15,000 roles as AI began generating 30% of its code. Amazon cut 30,000 jobs while spending $100 billion on AI infrastructure.

Sector Analysis: Creative and Trades
Illustrators: Jobs in China's gaming sector fell 70% in one year. Clients accept "good enough" work (80% quality) at 5% of the cost. Western freelance graphic design and writing jobs fell 18.5% and 30% respectively within eight months of ChatGPT's launch.
Manual Labor: The U.S. construction industry lacks 1.7 million workers annually, but apprenticeships take five years. Humanoid robotics are advancing, with Unitree's R1 priced at $5,900 and Figure AI robots completing 1,250 runtime hours at BMW. Full automation is 10 to 15 years away, but partial displacement via smaller crews is closer.

The Orchestration Treadmill
Obsolescence Speed: Prompt engineering roles went from $375,000 salaries to obsolescence in 24 months. AI coding agents like Claude Code now resolve 72% of medium-complexity GitHub issues autonomously.
Fragile Expertise: Replacing junior workers with AI prevents the development of future senior talent. New engineers risk "fragile expertise," directed by tools they cannot debug during novel failure modes.

Economic and Expert Outlook
Macro Risks: Daron Acemoglu warns of "so-so automation" that cuts costs without raising productivity, predicting only 0.66% growth over ten years. "Ghost GDP" describes AI-inflated accounts that fail to circulate because machines do not consume.
Expert Camps: Accelerationists (Anthropic, OpenAI) predict human-level AI by 2027. Skeptics (LeCun, Marcus) argue LLMs are a dead end lacking world models. Pragmatists (Andrew Ng) suggest shifting from implementation to specification as the cost of code nears zero.

Tactical Adaptation for ML Engineers
Immediate Skills: Master production ML systems, MLOps, LLM evaluation, and safety engineering. The ability to manage deployment risks and hallucination detection is the primary hiring differentiator.
Long-term Moats: Focus on "Small AI" (on-device, private), mechanistic interpretability, and deep domain knowledge in healthcare, logistics, or climate science (a toy quantization sketch follows these notes).
The Playbook: Optimize for the current three- to five-year window. Move from being a model builder to a product-focused engineer who understands business tradeoffs and regulatory compliance.
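Both MLA episodes above flag quantization for on-device "Small AI" as a durable skill. Here is a toy numpy sketch of the core idea behind post-training int8 quantization: store weights as int8 plus one scale factor, trading a little accuracy for a 4x memory cut. Real frameworks use per-channel scales and calibration data; everything below is an illustrative assumption, not from the episodes.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: w ~= scale * q, with q in int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
w = rng.normal(0.0, 0.02, size=(512, 512)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"{w.nbytes:,} bytes -> {q.nbytes:,} bytes, mean abs error {err:.2e}")
```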
Disturbing pics of Stephen Hawking on Epstein Island released, USA Men's Hockey visits Trump, Nancy Guthrie reward raised, Bonnie Blue knocked up, and Trudi fights her toilet. Programming Note: Marcie Hume (Corey Feldman vs. The World) and Lita Ford will join us tomorrow. The State of the Union is going down tonight. The US Men's Hockey Team is getting some heat following their recent communication with Donald Trump. Savannah Guthrie is now offering a $1M reward for her mother Nancy. Some turds are threatening to boycott the Met Gala due to Jeff Bezos' sponsorship. Stephen Hawking photos have emerged of him living it up on Epstein Island. Drew confirms John Lennon's weiner is uncirc'd. AI confirms they all were uncircumcised. Legacy Partners drops a new $50 gift card winner. Congrats to _____________! Darren McCarty dropped by the studio today for ML's Soul of Detroit. TJ Miller is in town. Check him out in Royal Oak this weekend. Jim Breuer is popping off at American Airlines. Mickey Redmond's grandson, Teddy, has a rare form of leukemia and could use financial help. A BAFTAs judge has quit following the n-word incident. Eric Dane's family is still fundraising. Rebecca Gayheart has broken her silence. Hey Taylor Swift... why you look different? Cruz Beckham and the Breakers are the hot new rock act. Andy Dick remains in physical shambles. Lisa Rinna has been drugged... in front of everyone. Some people are saying she might have been over served. The Olympic Men's Hockey Final is the most watched pre-9am sports event in history. Evan Dando of The Lemonheads can't catch a break. Trudi destroyed her toilet. Drew's hot water heater took a dump. Drew was nearly bamboozled by credit card thieves again. It's tax season. Hooray. Steven Spielberg is bailing on California for New York. Congressman Tony Gonzales has himself quite the scandal. Is Bonnie Blue really pregnant or is this all a stunt? Maury Povich wants nothing to do with the situation. Drew reeducated himself on the crimes of D.B. Cooper. The trial has resumed for the Alexander Brothers. Merch is still available. Buy it before it's gone. If you'd like to help support the show… consider subscribing to our YouTube Channel, Facebook, Instagram and Twitter (Drew Lane, Marc Fellhauer, Trudi Daniels, Jim Bentley and BranDon)
In this episode, Kay Suthar sits down with Patrick Twitchett to break down why efficiency and optimisation should be at the core of every business. Patrick, founder of CASE MASTERMIND and widely known as “The Simplifier,” shares how entrepreneurs can increase income, reduce unnecessary costs, and simplify operations without sacrificing growth. They explore the power of masterminds, the principle that you are the average of the five people you spend the most time with, and why proximity can dramatically shift your results. Patrick also dives into the difference between to-do lists and calendars, how to properly calculate your professional rate, and why outsourcing is often the smartest financial move you can make. If you've ever felt overwhelmed, overworked, or stuck in complexity, this episode is your reminder that less really is more.

What to expect in this episode:
(00:00) – Why efficiency and optimisation drive business growth
(04:10) – Lessons from Rich Dad Poor Dad and Think and Grow Rich
(07:40) – Living by the principle “less is more”
(11:20) – The real difference between to-do lists and calendars
(15:00) – How to calculate your professional hourly rate
(18:50) – Why outsourcing can actually make you more money
(22:30) – The power of masterminds and proximity
(26:40) – A mastermind member repurposing a marketing strategy in real time

About Patrick Twitchett
Patrick Twitchett is the founder of CASE MASTERMIND. He helps entrepreneurs optimise costs and improve income through his consultancy service Simplies, combining the words simple and supplies. Known as “The Simplifier,” Patrick supports business owners in streamlining operations and building stronger financial foundations. He also speaks regularly on the CASE Broadcast alongside Melvyn Manning as MēL and PāT, discussing business growth and personal development.

Connect with Patrick Twitchett
Website: https://www.casemastermind.co.uk/
Email: patrick.twitchett@simplies.co.uk
Facebook Group: https://www.facebook.com/groups/CASEmastermind/
Instagram: https://www.instagram.com/case_mastermind/
YouTube: https://www.youtube.com/channel/UCW0rA_8xhXgFZZApcG4QXsw
Twitter: https://twitter.com/CASEmastermind
LinkedIn: https://www.linkedin.com/company/casenetworking/

FREE Gift from Patrick
Sign up as a Chrome member and receive the CASE Mastermind newsletter: https://casemastermind.co.uk/

Connect with Kay Suthar
Business Website: https://makeyourmarkagency.com/
Podcast Website: https://www.makeyourmarkpodcast.com/
LinkedIn: https://www.linkedin.com/in/kay-suthar-make-your-mark/
Facebook Group: https://www.facebook.com/groups/482037820744114
Email: kay@makeyourmarkagency.com

FREE Gifts from Kay Suthar:
3 Ultimate Secrets to Getting Booked on Podcasts: https://getbookedonpodcast.com
5 Simple Steps to Launch Your Podcast in 14 Days: https://14daystolaunch.com
Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors from Geoff Hinton to Yann LeCun and a team of deep domain experts to tackle this next frontier in AI applications.

In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!!!).

We begin with a provocative framing: experiments as computation. Welling describes the idea of a “physics processing unit”—a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.

Along the way, we discuss:
* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials—not software—may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy. The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps
* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer
* Max introduces the idea of a Physics Processing Unit — using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials
* Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved
* Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy
* Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning
* How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging)
* The funding surge and why AI-for-Science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science
* Protein folding, ML force fields, and the tipping point moment.
* 00:10:12 – How Engineers Can Enter AI for Science
* Practical pathways: curriculum, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software
* The argument that everything—LLMs included—rests on materials innovation.
* 00:13:02 – Materials as a Search Engine
* The vision: automated exploration of chemical space like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture
* Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First
* Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins
* Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans
* Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)?
* Symmetry in neural networks explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation?
* The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics
* His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, like a PPU, right? Which is you have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible, even. It's a bit hard to program because you have to do all these experiments. Those are quite bulky; it's like a very large thing you have to do. But in a way it is a computation and that's the way I want to see it. You can do computations in a data center and then you can ask nature to do some computations. Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in.

[01:00:44:14 - 01:01:34:08]
Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which have literally stood the test of time. If you are a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which is, I think, very different from all of these other things you worked on. The first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what is the next big thing you want to work on?

[01:01:34:08 - 01:02:41:13]
Max: So it has actually evolved a lot. In my younger days, let's say, I would just follow what I found super interesting. I have kind of this sensor, which I think many people have but maybe don't really use very much, which is this feeling of getting very excited about some problem. It could be, what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about.
And so I followed that basically throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that there's a new dimension coming into it, and that's impact. Working in two-dimensional quantum gravity, you're pretty much guaranteed there's going to be no impact from what you do. Maybe a few papers, but not on this world, at this energy scale. As I get closer to retirement, which is fortunately still 10 years away or so, I do want to kind of make a positive impact in the world. And I got pretty worried about climate change.

[01:02:43:15 - 01:03:19:11]
Max: I think politics seems to have a hard time solving it, especially these days. And so I thought better to work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. And so it's kind of combining the impact you can make with the interesting science. So it's sort of these two dimensions: working on things where you feel there's something very deep going on, and on the other hand trying to build tools that can actually make a real impact in the world.

[01:03:19:11 - 01:03:39:23]
RJ: So the thread, when I look back at the different things that you worked on, some of them seem pretty connected, like the physics to equivariance and graph neural networks, maybe. And that seems to be somewhat related to CuspAI. Do you have a thread through there?

[01:03:39:23 - 01:06:52:16]
Max: Yeah. So physics is the thread. Having spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, things that haven't actually been figured out in quantum gravity. So that is really the frontier. There are also a lot of mathematical tools that you can use, right? In particle physics, for instance, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to machine learning was, I thought, a very deep and interesting mathematical problem. I did this with Taco Cohen, and Taco was the main driver behind this; it went all the way from simple rotational symmetries to gauge symmetries on spheres and stuff like that. And Maurice Weiler, who's also here, was a very good PhD student with me; he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently, I've taken a different path, which is the relationship between diffusion models and the field called stochastic thermodynamics. This is basically thermodynamics, which is a theory of equilibrium, but then formulated for out-of-equilibrium systems. And it turns out that the mathematics that we use for diffusion models, but even for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. And actually, when I taught a course in Muizenberg, in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), I turned that into a book. Two years later, the book was finished. I've sent it to the publisher.
And this is about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of, I don't know, I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story, which is maybe good to tell, to connect my PhD back to where I am now. So I did my PhD with a Nobel laureate, Gerard 't Hooft. He is, I would say, the most brilliant man I've ever met. He was never wrong about anything, as far as I've seen. And now he says quantum mechanics is wrong and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex, but he's trying to address this understandability, let's say, of quantum mechanics head on. And I find it very courageous and I'm completely fascinated by it. So I'm also trying to think about, okay, can I actually understand quantum mechanics in a more mundane way, without all the weird multiverses and collapses and stuff like that. So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.

[01:06:52:16 - 01:07:05:15]
Brandon: You are still very involved in understanding physics and the world, and in applying it to machine learning, introducing new formalisms. That's really cool.

[01:07:05:15 - 01:07:18:02]
Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and machine learning. And that's called AI for science, or science for AI; it's actually a new discipline that's emerging.

[01:07:18:02 - 01:07:18:19]
Speaker 5: Yeah.

[01:07:18:19 - 01:07:45:14]
Max: And it's not just emerging, it's exploding, I would say. That's the better term, because investments have gone from the hundreds of millions into the billions now. There's now actually a startup by Jeff Bezos that raised a 6.2 billion dollar seed round. Right. Insane. I guess it's the largest seed round ever, I think. And that's in this field, AI for science. It tells you something, that we are creating a new bubble here.

[01:07:46:15 - 01:07:53:28]
Brandon: So why do you think that is? What has changed that has motivated people to start working on AI for science type problems?

[01:07:53:28 - 01:08:49:17]
Max: So there's two reasons actually. One is that people have been applying the new tools from AI to the sciences, which is quite natural. And there are, I think, two big examples: protein folding is a big one, and the other one is machine learning force fields, or something called machine-learned interatomic potentials. Both of them have been actually very successful. Both also had something to do with symmetries, which is a little cool. And people in the AI sciences saw an opportunity to apply the tools that they had developed beyond ad placement, right, or multimedia applications, into something that could actually make a very positive impact in society, like health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.

[01:08:50:19 - 01:09:42:14]
Max: Beyond that, the science itself is also very interesting.
I would say the fact that these two fields are coming together, and that we're now at the point where we can actually model these things effectively and move the needle on some of these science methodologies, is also a very unique moment. People recognize that, okay, now we're at the cusp of something new, which is, as it happens, what the company is named after. We're at the cusp of something new. And of course that always creates a lot of energy. It's like a virgin field, a green field. Nobody's been there. I can rush in and sort of start harvesting there, right? And I think that's also what's causing a lot of enthusiasm in the field.

[01:09:42:14 - 01:10:12:18]
RJ: Most of the people who listen to this podcast are AI engineers, and they may not have a strong science background but are excited. I would say most AI practitioners, ML engineers or scientists, have some background: a little bit of physics in college, maybe even graduate school. How does somebody who is not a scientist on a day-to-day basis get involved?

[01:10:12:18 - 01:10:14:28]
Max: Well, they can read my book once it's out.

[01:10:16:07 - 01:11:05:24]
Max: More seriously, we should create curricula that are on this interface. Some universities already have actual courses you can take, maybe online courses. These workshops where we are now are actually very good as well. And we should probably have more tutorials before the workshop starts. I've kind of proposed this at some point: maybe first have an hour of tutorial so that people new to the field can get in. There's a lot out there. Most of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast, I would say. So I think it will come. And these days you can watch videos and things. There's a huge amount of content you can go and see.

[01:11:05:24 - 01:11:28:28]
Brandon: So maybe a follow-up to that: that's how people learn and get involved, but why should they get involved? I mean, a lot of people in our audience will be interested in AI engineering, but they may be looking for bigger impacts on the world. What opportunities does AI for science provide them to make an impact and change the world that working in the world of pure bits would not?

[01:11:28:28 - 01:11:40:06]
Max: So my view is that underlying almost everything is a material. We are focusing a lot on LLMs now, which is kind of the software layer.

[01:11:41:06 - 01:11:56:05]
Max: I would say if you think very hard, underlying everything is a material. So underlying an LLM is a GPU, and underlying a GPU is a wafer on which we have to deposit materials. Do we want to wait a little bit?

[01:12:02:25 - 01:12:11:06]
Max: Underlying everything is a material. So I was saying, you know, there's the LLM; underlying the LLM is a GPU on which it runs. In order to make that GPU,

[01:12:12:08 - 01:12:43:20]
Max: you have to put materials down on a wafer and shine on it with EUV light in order to etch the structures in.
But that's now an actual materials problem, because more or less we've reached the limits of scaling things down, and now we are trying to improve further by new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to kind of mess up this world. And so there are, for instance, batteries. That's a complete materials problem. There are fuel cells.

[01:12:44:23 - 01:13:01:16]
Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can capture, you know, theoretically up to 50% of the light, where now we're at, I don't know, maybe 22% or something. So these are huge changes, all by materials innovation.

[01:13:02:21 - 01:13:47:15]
Max: And yeah, I think wherever you go, I can probably dig deep enough and then tell you, well, actually, the very foundation of what you're doing is a materials problem. And so I think it's just very nice to work on this very, very foundation. And also because, and I think this is maybe also something that's happening now, we can start to search through this materials space. This has never been the case, right? The normal way of working for scientists is you read papers and then you come up with a new hypothesis, you do an experiment and you learn, et cetera. That's a very slow process. Now we can treat this as a search engine. Like we search the internet, we now search the space of all possible molecules, not just the ones that people have made or that exist in the universe, but all of them.

[01:13:48:21 - 01:14:42:01]
Max: And we can make this kind of fully automated. That's the hope, right? It becomes a tool where you type what you want and something starts spinning and some experiments get going. And then out comes a list of materials, and you look at it and say, maybe not, and then you refine your query a little bit. And you kind of do research with this search engine, where a huge amount of computation and experimentation is happening somewhere far away in some lab or some data center or something like this. I find this a very, very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you come up with a plastic that kind of destroys itself, you know, after, I don't know, a few weeks, and actually becomes a fertilizer? These are things that are not impossible at all. These things can be done, right? And we should do it.

[01:14:42:01 - 01:14:47:23]
RJ: Can you tell us a little bit, just generally, about CuspAI? And then I have a ton of questions.

[01:14:47:23 - 01:14:48:15]
Speaker 5: Yeah.

[01:14:48:15 - 01:17:49:10]
Max: So CuspAI started about 20 months ago, and it was because I was worried, and I'm still worried, about climate change. And so I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then spend another half century or even a century removing carbon dioxide from the atmosphere, not by reducing emissions, but actually removing it at a rate that's about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right? It's going to be much more. And I don't think people quite understand how bad that can be. Like four degrees: very bad.
So this technology needs to be developed. And so this was the motivation for me and my co-founder, Chad Edwards, to start this startup. And also because we saw the technology was ready, which is also very good. So, you know, the time was right to do it. In the meanwhile, we've grown to about 40 people. We've collected 130 million of investment into the company, which for a European company is quite a lot. I would say it's interesting that right after that, other startups got even more. So that kind of tells you how fast this is growing. But yeah, we've built the platform, of course, but it's for a series of material classes, and it needs to be constantly expanded to new material classes. And it can be more automated, because as we put LLMs in, the whole thing gets more and more automated. And now we're moving to high-throughput experimentation: connecting the actual platform, which is computational, to the experiments, so that you can also get fast feedback from experiments. Experiments are usually thought of as something you do at the end, and that's what we've been doing so far. But I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible, even. It's a bit hard to program, because you have to do all these experiments, and those are quite bulky; it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. So you can do computations in a data center and then you can ask nature to do some computations. Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. And that's the vision we have. We don't say superintelligence, because I don't quite know what it means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.

[01:17:49:10 - 01:18:01:02]
Brandon: That actually brings up a question I wanted to ask you. First of all, can you talk about your platform, to whatever degree you can? Explain how it works and what your thought process was in developing it.

[01:18:01:02 - 01:20:47:22]
Max: Yeah, it's been surprisingly, it's not rocket science, I would say. It's not rocket science in the sense that the design I wrote down at the very beginning is still more or less the design, although you add things. I wasn't thinking very much about multi-scale models, and as it turned out, multi-scale is actually very important. In the beginning I also wasn't thinking very much about self-driving labs, but now I think we're at the stage where we should be adding that. So there are bits and details that we're adding. But more or less it's what you see in the slide decks here as well, which is: there is a generative component that you have to train to generate candidates.
And then there is a digital twin, a multi-scale, multi-fidelity digital twin, where you walk up the steps of a ladder: you do the cheap things first, you weed out everything that's obviously not useful, and then you go to more and more expensive things later. And so you narrow things down to a small number. Those go into an experiment; you do the experiment, get feedback, et cetera. Things that have been added more recently are the more agentic parts. We have agents that search the literature, actually the chemical literature, and come up with chemical suggestions for doing experiments. We have agents which autonomously orchestrate all of the computations and the experiments that need to be done. They're in various stages of maturity and they can be continuously improved, I would say. So I don't think that part is rocket science; the design of the thing is not surprising. What is surprisingly hard is to actually build it, right? That's where the moat is: in the data that you can get your hands on, and in actually building the platform. And I would say there are two people in particular I want to call out, which is Felix Hunker, who is building the scientific part of the platform, and Sandra de Maria, who is building the scaling side, kind of the MLOps part of the platform. And recently we also added Aron Walsh to our team, who is a very accomplished scientist from Imperial College. We're very happy about that. He's going to be chief science officer. And we also have a partnerships team that seeks out the customers, because this is one thing I find very important: in practice, it's so complex to actually bring a material to the real world that you must do this in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.

[01:20:47:22 - 01:20:55:12]
Brandon: Makes a lot of sense. Over the evolution of the platform, what did you find about human intervention?

[01:20:56:18 - 01:21:17:01]
Brandon: I guess you could imagine two directions when you start up. Making everything purely automated and agentic from the start, and then later on finding that you need more human input and feedback at different steps. Or did you start out with human feedback at lots of steps and then figure out ways to remove it?

[01:21:17:01 - 01:22:39:18]
Max: It is the second one. So you build tools. It's much more modular than you think: we need these tools for this application, we need those tools for that one. So you build all these tools, and then you go through a workflow, in the beginning just manually. You run first this tool, then that tool, and so on. So you put them in a workflow, and then you figure out, oh, actually, this porous material that we are trying to make collapses if you shake it a bit. Okay, then you add a new tool that says test for stability. Right. And so there are more and more tools.
And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, maybe trained to be a good chemist, that will then start to use all these tools in the right way and in the right order. But in the beginning, it's you as a chemist putting the workflow together. And then you think about, okay, how am I going to automate this? One very easy question you can ask yourself: every time somebody who is not a super expert in DFT (density functional theory) wants to do a calculation, they have to go to somebody who knows DFT. Could you start to automate that away? Make it so user-friendly that you actually do the right DFT for the right problem and for the right length of time, and you can actually assess whether it's a good outcome, et cetera. So you start to automate smaller pieces and bigger pieces, and in the end the whole thing is automated.

[01:22:39:18 - 01:22:53:25]
Brandon: So your philosophy is that you want to provide a set of specific tools so that the scientists making decisions are better informed, rather than trying to create a fully automated process.

[01:22:53:25 - 01:23:22:01]
Max: I think this is the same as what you're saying, because yes, we want to automate, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it's a retreat, right? It's like, okay, first you need an expert to tell you precisely how to set the parameters of the DFT calculation. Okay, maybe we can take that out, maybe we can automate that, right? And so increasingly more of these things are going to be removed.

[01:23:22:01 - 01:23:22:19]
Speaker 5: Yeah.

[01:23:22:19 - 01:24:33:25]
Max: In the end, the vision is it will be a search engine where somebody, a chemist, will type things and get candidates, but the chemist will still decide what is a good material and what is not a good material out of that list, right? And so the vision of a completely dark lab, where you can close the door and just say, find something interesting, and it will figure out what's interesting and come back with, oh, I found this new material that does such and such, right? That's not the vision I have. Not for, you know, a long time. So for me, it's really about empowering the domain experts sitting in the companies and in universities to be much faster in developing their materials. And I should say, it's also good to be a little humble at times, because it is very complicated to make a material and to bring it into the real world. And there are people that have been doing this their entire lives. I wonder if they scratch their heads and say, well, how are you going to completely automate that away, like in the next five years? I don't think that's going to happen at all.

[01:24:35:01 - 01:24:39:24]
Max: Yeah. So to me, it's an increasingly powerful tool in the hands of the chemists.

[01:24:39:24 - 01:25:04:02]
RJ: I have a question. You've talked before about getting people interested based on having, you know, a big breakthrough in materials versus incremental change.
I'm curious what you think about the platform you have now and what you're stepping towards. Are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?

[01:25:04:02 - 01:26:04:27]
Max: We follow a mixed strategy. So we are definitely going after a big material. Again, we do this with a partner. I'm not going to disclose precisely what it is, but we have our own kind of long-term goal. You could call it a lighthouse or a moonshot or whatever, but it is going to be a really impactful material that we want to develop as a proof point that it can be done, that it will make it into the real world, and that AI was essential in actually making it happen. At the same time, we are also quite happy to work with companies that have more modest goals. I would say one mode is a very deep partnership, where you go on a journey with a company and that's a long-term commitment together. And the other one is somebody says, I need a force field. Can you help me train this force field and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that, and then maybe after that we'll see. And that's fine too, right? But we prefer the deep partnerships, where we can really change something for the good.

[01:26:04:27 - 01:26:22:02]
RJ: Yeah. And do you feel like, from a platform standpoint, you're ready for that? Again, not asking you to disclose proprietary secret sauce, but generally speaking, what needs to happen from where we are now to get those big breakthroughs?

[01:26:22:02 - 01:28:40:01]
Max: What I find interesting about this field is that every time you build something, it's actually immediately useful, right? So it's unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and nothing, nothing, nothing, and then it has to happen, right? And when it happens, it's huge. It's quite different here, because you go to a customer and you say, so what do you need? So we work, let's say, on a problem like water filtration. We want to remove PFAS from water, right? We do this with a company, Kemira. They are a deep partner for us, and we're on a journey together. I think that the breakthrough will happen with a lot of human in the loop, because there are the chemists, who have a whole lot more knowledge of their field, and it's us who will help them with the new methods. And at that interface, in those interactions, something beautiful will happen, and that will have to happen first before this field really takes off, I think. In that sense it's not a bubble, let's put it that way: people will see that what's happening is actually real. So in the beginning it will be with a lot of humans in the loop, I would say, and I would hope we will have this new breakthrough material before everything is completely automated, because that will take a while. And also, it is very vertical-specific. Completely automating something for problem A, you can probably achieve, but then you'll have to start over again for problem B, because your experimental setup looks very different, and the machines with which you characterize your materials look very different.
Even the models in your platform will have to be retrained and fine-tuned to the new class. So every time, you have a lot of learnings to transfer, but also the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is kind of a long-term vision. And I would say every time you move to something new, you'll have to start retraining, and humans will have to come in again and say, okay, so what does this problem look like? And then point the machine in the new direction and use it again.

[01:28:40:01 - 01:28:47:17]
RJ: For the non-scientists among us, me included, though I'm a bit of a scientist: there's a lot of terminology. You mentioned DFT,

[01:28:49:00 - 01:29:01:11]
RJ: and equivariance we've talked about. Can you explain, in engineering terms, at an engineer's level of sophistication: what is equivariance?

[01:29:01:11 - 01:29:55:01]
Max: So equivariance is the infusion of symmetry into neural networks. If I build a neural network, let's say, that needs to recognize this bottle, right, and then I rotate the bottle, it will actually have to start again completely, because it has no idea that the input representing a rotated bottle is actually a rotated bottle. It just doesn't understand that. If you build equivariance in, then basically once you've trained it in one orientation, it will understand it in any other orientation. So that means you need a lot less data to train these models. And these are constraints on the weights of the model. So basically you have to constrain the weights such that the model understands this. And you can build it in, you can hard-code it in. And the symmetry groups can be, you know, translations, rotations, but also permutations. In a graph neural network, it's permutations, and in physics, of course, there are many more of these groups.

[01:29:55:01 - 01:30:01:08]
RJ: To play devil's advocate, why not just use data augmentation, where your bottle appears in all the different orientations?

[01:30:01:08 - 01:30:58:23]
Max: That's an option, it's just not exact. And why would you go through the work of doing all that? You would really need an infinite number of augmentations to get it completely right, whereas you can also hard-code it in. Now, I have to say, sometimes data augmentation actually works even better than hard-coding the equivariance in. And this has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface, or objective, becomes more complicated, and so it's harder to find good minima. So there is also a complicated interplay, I think, between the optimization process and these constraints you put in your network. And so, yeah, you'll hear contradicting claims in this field. Some people say, for certain applications, it just works better than not doing it. And sometimes you hear other people say, if you have a lot of data and you can do data augmentation, then it's actually easier to optimize and it works better than building the equivariance in.

[01:30:58:23 - 01:31:07:16]
Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?

[01:31:07:16 - 01:31:46:06]
Max: Yeah, ultimately it's a trade-off between data and inductive bias.
So if your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do. But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. But yeah, there is a bitter lesson. And one of the bitter lessons is you should always make sure your architecture scales, unless you have a tiny data set, in which case it doesn't matter. The same bitter lessons that you can draw in LLM space are eventually going to be true in this space as well, I think.

[01:31:47:10 - 01:31:55:01]
RJ: Can you talk a little bit about your upcoming book and tell the listeners what's exciting about it, why I should read it?

[01:31:55:01 - 01:33:42:20]
Max: So this book is called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into generative AI, which is the technology to generate images and videos, and the mathematics of this field of non-equilibrium statistical mechanics, which studies systems of molecules that are moving around and relaxing to the ground state, or that you can control to be in a certain state, is actually identical. And that's fascinating. In fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago. And there's also Carl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics, which is called stochastic thermodynamics or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about but can learn a lot from. And I think the two fields can now start to cross-fertilize. When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and say, okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to help the scientists do better science. And so it becomes a beautiful cross-fertilization between these two fields. The book is rather technical, I would say. It takes all sorts of things that have been done in stochastic thermodynamics and all sorts of models that have been done in the machine learning literature, and it basically equates them to each other. And hopefully that sense of unification will be revealing to people.

[01:33:42:20 - 01:33:44:05]
RJ: Wait, and when is it out?

[01:33:44:05 - 01:33:56:09]
Max: Well, it depends on the publisher now. But I hope in April; I'm going to give a keynote at ICLR, and it would be very nice if I have this book in my hand. But you know, it's hard to control these kinds of timelines.

[01:33:56:09 - 01:33:58:19]
RJ: Yeah, I'm looking forward to it. Great.

[01:33:58:19 - 01:33:59:25]
Max: Thank you very much.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe
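Editor's note: to make Max's equivariance explanation concrete, here is a minimal numpy sketch (our illustration, not from the episode). A weight-shared circular convolution is translation-equivariant: shifting the input just shifts the output, so nothing has to be relearned. This is the property that Taco Cohen and others generalized from translations to rotations and gauge symmetries.

```python
import numpy as np

def circular_conv(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Circular cross-correlation: one shared filter applied at every position."""
    idx = (np.arange(len(x))[:, None] + np.arange(len(k))[None, :]) % len(x)
    return x[idx] @ k

rng = np.random.default_rng(0)
x = rng.normal(size=16)  # a toy 1-D signal
k = rng.normal(size=3)   # "learned" filter weights
shift = 5

# Equivariance check: f(shift(x)) == shift(f(x))
lhs = circular_conv(np.roll(x, shift), k)
rhs = np.roll(circular_conv(x, k), shift)
print("translation-equivariant:", np.allclose(lhs, rhs))  # True
```

Data augmentation, by contrast, would only approximate this property by training on many shifted copies, which is the trade-off Max describes.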
Darren McCarty is into wrestling, but ML and Marc are lovers, not fighters, so they ask D Mac to tell […]
Most people in AI are trying to give AIs 'good' values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and completely indifferent to being shut down — a strategy no AI company is working on at all.

In Max's view any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and be the seeds that grow into a violent takeover attempt once that AI is powerful enough.

It's a vision that springs from the worldview laid out in If Anyone Builds It, Everyone Dies, the recent book by Eliezer Yudkowsky and Nate Soares, two of Max's colleagues at the Machine Intelligence Research Institute.

To Max, the book's core thesis is common sense: if you build something vastly smarter than you, and its goals are misaligned with your own, then its actions will probably result in human extinction.

And Max thinks misalignment is the default outcome. Consider evolution: its "goal" for humans was to maximise reproduction and pass on our genes as much as possible. But as technology has advanced we've learned to access the reward signal it set up for us, pleasure — without any reproduction at all, by having sex while on birth control for instance.

We can understand intellectually that this is inconsistent with what evolution was trying to design and motivate us to do. We just don't care.

Max thinks current ML training has the same structural problem: our development processes are seeding AI models with a similar mismatch between goals and behaviour. Across virtually every training run, models designed to align with various human goals are also being rewarded for persisting, acquiring resources, and not being shut down.

This leads to Max's research agenda. The idea is to train AI to be "corrigible" and defer to human control as its sole objective — no harmlessness goals, no moral values, nothing else. In practice, models would get rewarded for behaviours like being willing to shut themselves down or surrender power.

According to Max, other approaches to corrigibility have tended to treat it as a constraint on other goals like "make the world good," rather than a primary objective in its own right. But those goals gave AI reasons to resist shutdown and otherwise undermine corrigibility. If you strip out those competing objectives, alignment might follow naturally from AI that is broadly obedient to humans.

Max has laid out the theoretical framework for "Corrigibility as a Singular Target," but notes that essentially no empirical work has followed — no benchmarks, no training runs, no papers testing the idea in practice. Max wants to change this — he's calling for collaborators to get in touch at maxharms.com.

Links to learn more, video, and full transcript: https://80k.info/mh26

This episode was recorded on October 19, 2025.

Chapters:
Cold open (00:00:00)
Who's Max Harms? (00:01:22)
A note from Rob Wiblin (00:01:58)
If anyone builds it, will everyone die? The MIRI perspective on AGI risk (00:04:26)
Evolution failed to 'align' us, just as we'll fail to align AI (00:26:22)
We're training AIs to want to stay alive and value power for its own sake (00:44:31)
Objections: Is the 'squiggle/paperclip problem' really real? (00:53:54)
Can we get empirical evidence re: 'alignment by default'? (01:06:24)
Why do few AI researchers share Max's perspective? (01:11:37)
We're training AI to pursue goals relentlessly — and superintelligence will too (01:19:53)
The case for a radical slowdown (01:26:07)
Max's best hope: corrigibility as stepping stone to alignment (01:29:09)
Corrigibility is both uniquely valuable, and practical, to train (01:33:44)
What training could ever make models corrigible enough? (01:46:13)
Corrigibility is also terribly risky due to misuse risk (01:52:44)
A single researcher could make a corrigibility benchmark. Nobody has. (02:00:04)
Red Heart & why Max writes hard science fiction (02:13:27)
Should you homeschool? Depends how weird your kids are. (02:35:12)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore
Furf and Monty are back with another Pulm PEEPs Pearls episode. The topic of today's discussion is an often discussed, but often misunderstood, test: the methacholine challenge. They'll review when to utilize this test, how it should be performed, and the appropriate interpretation.

Contributors
This episode was prepared with research by Pulm PEEPs Associate Editor George Doumat. Dustin Latimer, another Pulm PEEPs Associate Editor, assisted with audio and video editing.

Key Learning Points

What the Test Measures
Methacholine challenge is a direct bronchial provocation test of airway hyperresponsiveness (AHR), a core physiologic feature of asthma. Anyone will bronchoconstrict at high enough concentrations — the test looks for an abnormal threshold. The key endpoint is the PC20: the methacholine concentration causing a 20% fall in FEV1 (a worked example of the calculation follows these notes). Abnormal in adults: PC20 ≤ 8–16 mg/mL.

Test Performance
Meta-analyses: pooled sensitivity ~60%, specificity ~90%. Real-world cohorts: sensitivity 55–62%, specificity 56–100% (varies by population, protocol, and threshold used). Not a standalone yes/no test — best used as part of a broader diagnostic pathway.

Where It Fits in the Asthma Workup
The test belongs in a stepwise approach:
Step 1: Spirometry + bronchodilator response
Step 2: Add FeNO and/or peak flow variability (if available)
Step 3: If the picture is still unclear → methacholine challenge
It is most useful for symptomatic patients with normal spirometry and no bronchodilator reversibility. Given its cost, mild risk, and discomfort, it should not be a first-line test — most asthma diagnoses do not require it.

Technique and Medication Prep

Technique
ERS guidelines favor tidal breathing over deep inspiratory maneuvers. Deep breaths can be bronchoprotective and blunt the response, reducing sensitivity — especially in mild or well-controlled asthma.

Medication Washout (to Avoid False Negatives)
Short-acting beta-agonists (SABA): ≥ 6 hours
Long-acting beta-agonists (LABA): ~24 hours
Ultra-long-acting beta-agonists: ~48 hours
Short-acting anticholinergics (e.g., ipratropium): ~12 hours
Long-acting muscarinic antagonists (LAMA, e.g., tiotropium): 7 days
Inhaled corticosteroids, leukotriene blockers, and antihistamines do not significantly affect the test acutely — continue these. Withdrawing ICS also carries its own risk for asthma patients.
Practical tip: Spell out exactly what to hold and when — for both the patient and the PFT lab — at the time the test is ordered.

Interpreting Results

Negative Test (PC20 > 16 mg/mL)
Very high negative predictive value in symptomatic adults. Makes current asthma quite unlikely (assuming proper test conduct). This is the test's greatest strength: it is an excellent rule-out test.

Positive Test (PC20 ≤ 8–16 mg/mL)
More nuanced — airway hyperresponsiveness is not unique to asthma. The test can be positive in chronic cough, allergic rhinitis, COPD, and even some healthy asymptomatic individuals. A positive result raises probability but must be interpreted alongside the clinical story, variable respiratory symptoms, peak flow variability, FeNO, and ICS response.

Safety and Risks
Overall, the test is quite safe; significant adverse effects are rare. Temporary breathing discomfort is expected (bronchoconstriction is being induced). Severe bronchospasm is possible: a trained clinician should be available, a SABA inhaler/nebulizer must be immediately on hand, and a physician should be reachable in the facility.
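For readers who want to see how the PC20 endpoint above is actually computed, here is a minimal sketch of the standard log-linear interpolation between the last two methacholine concentrations. The function name and the example numbers are illustrative; consult the ERS technical standard cited in the references for the authoritative procedure.

import math

def pc20(c1, c2, r1, r2):
    # Log-linear interpolation of the provocative concentration causing a
    # 20% fall in FEV1.
    # c1, c2: second-to-last and last methacholine concentrations (mg/mL)
    # r1, r2: percent fall in FEV1 after c1 and c2 (r1 < 20 <= r2)
    log_c1, log_c2 = math.log10(c1), math.log10(c2)
    return 10 ** (log_c1 + (log_c2 - log_c1) * (20 - r1) / (r2 - r1))

# Example: a 15% fall at 4 mg/mL and a 25% fall at 8 mg/mL
print(round(pc20(4, 8, 15, 25), 1))  # ~5.7 mg/mL, positive at the <= 8 mg/mL cutoff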
Contraindications / Cautions
Avoid if FEV1 < 70% predicted or < 1–1.5 L (baseline obstruction greatly increases risk). Avoid within 3 months of an acute cardiac event (rare risk of cardiac events with unstable cardiac disease).

Five Pearls — Quick Recap
1. What it tests: Methacholine challenge is a direct test of AHR with high specificity but variable sensitivity — it belongs inside a diagnostic pathway, not as a standalone asthma test.
2. When to use it: Most useful for symptomatic patients with normal spirometry and no bronchodilator response, after FeNO and peak flow variability have been considered.
3. Technique and meds matter: Use a tidal breathing protocol; respect washout intervals — especially the 7-day LAMA washout and 24–48 hour LABA window — to avoid false negatives.
4. Safety: Generally safe, but can induce significant bronchoconstriction. Have a SABA available and avoid the test in patients with FEV1 < 70% predicted.
5. Interpretation: A negative test (PC20 > 16 mg/mL) strongly argues against current asthma. A positive test raises probability but is not specific — interpret alongside the full clinical picture.

References and Further Reading
Coates AL, Wanger J, Cockcroft DW, Culver BH; Bronchoprovocation Testing Task Force: Carlsen KH, Diamant Z, Gauvreau G, Hall GL, Hallstrand TS, Horvath I, de Jongh FHC, Joos G, Kaminsky DA, Laube BL, Leuppi JD, Sterk PJ. ERS technical standard on bronchial challenge testing: general considerations and performance of methacholine challenge tests. Eur Respir J. 2017;49(5):1601526. doi: 10.1183/13993003.01526-2016. PMID: 28461290.
Lee J, Song JU. Diagnostic comparison of methacholine and mannitol bronchial challenge tests for identifying bronchial hyperresponsiveness in asthma: a systematic review and meta-analysis. J Asthma. 2021;58(7):883-891. doi: 10.1080/02770903.2020.1739704.
Davis BE, Blais CM, Cockcroft DW. Methacholine challenge testing: comparative pharmacology. J Asthma Allergy. 2018;11:89-99. doi: 10.2147/JAA.S160607. PMID: 29785128; PMCID: PMC5957064.
You are being judged on your image before you even open your mouth. In this episode, Dr. Najla Toledo, a specialist in full-face harmonization, reveals how aesthetics has become the new "dress code" of success. If you feel like your face is "melting," or that your image doesn't match your authority, you are losing the market game.

Forget exaggerated procedures. Dr. Najla explains the science behind bone-structure support and the dramatic impact that natural harmonization has on the confidence of business owners and leaders. Learn why going cheap gets expensive in the world of Botox, and how managing the aging process can save your career.

Available on YouTube: https://youtu.be/HT9iYrLbtxI

00:00:11 - A specialist in natural results: Dr. Najla Toledo.
00:09:26 - The 14 mL controversy and the ketchup analogy.
00:11:43 - Why filling the nasolabial folds can be a mistake.
00:18:31 - Hyaluronic acid vs. biostimulators: what's the difference?
00:31:04 - Celebrities who got it wrong: how to avoid "balloon face."
00:45:13 - Why doesn't your Botox last? The science of dosing.
01:03:55 - Inflammatory collapse: how diet affects your face.
01:19:05 - Special offer: 1 mL extra for Excepcionais followers.

Follow Dr. Najla on Instagram: https://www.instagram.com/dra.najlavicentini/
Follow us:
Marcelo Toledo: https://www.instagram.com/marcelotoledo
Instagram: https://www.instagram.com/excepcionaispodcast
TikTok: https://www.tiktok.com/@excepcionaispodcast
HOW TO CLEANSE THE LIVER - WILMER ZANGHIRATI URBANAZ (doctor of pharmacy, herbalist, naturopath, President of the Associazione Italiana Naturopati)
For Border Nights listeners, a special 10% discount with the code BORDER10 at https://areapharm.it/collections/all
FLUIDERB NAC 600, 30 capsules: https://areapharm.it/products/fluiderb-nac-600-30cps?_pos=1&_sid=3c17981ee&_ss=r
DEPUREX 300 ML: https://areapharm.it/products/depurex-300ml?_pos=1&_psq=DE&_ss=e&_v=1.0
DRENOSLIM 300 ML: https://areapharm.it/products/drenoslim-300ml?_pos=1&_sid=5355959af&_ss=r
Become a supporter of this podcast: https://www.spreaker.com/podcast/border-nights--654467/support
Blood vitamin D levels, not supplement dose, determine breast cancer risk, with studies showing roughly a 40% to 50% lower risk once levels rise into protective ranges.
Women who maintain blood vitamin D levels around 50 to 60 ng/mL experience the greatest protection, while levels below 20 ng/mL consistently link to higher and more aggressive breast cancer risk.
Large pooled analyses and clinical trials show breast cancer risk drops step by step as vitamin D levels increase, with no evidence of harm at higher physiological levels.
Sunlight, exercise, and metabolic health strongly influence how much vitamin D actually reaches and protects breast tissue, explaining why intake alone often falls short.
Addressing low vitamin D by combining sunlight, targeted supplementation, exercise, and metabolic support turns vitamin D into a measurable, trackable strategy for long-term breast cancer prevention.
On this week's Out of Ten, Sage is back to share his thoughts on the metal albums and tracks that are currently up there for both his albums and songs of the year! He reviews new albums from Converge, MØL, and Softcult, and Michael gives us his sad girl thoughts on Charli XCX's Wuthering Heights […] The post Episode 351 – Nightmare Tripping appeared first on Out Of Ten Podcast.
0:00 - It's time for an ML&K tradition: Brett explains why you should be optimistic about the Rockies and their upcoming season. Except this time, he's tempering his enthusiasm. There's still a long road ahead, but it seems like the Rockies might actually maybe possibly potentially make some slight changes that set them in the right direction.
17:16 - Should we be concerned about the Nuggets right now? Are there cracks starting to form in this (alleged) championship roster?
30:52 - Oh, by the way... Danika Mason is an Australian sports reporter, and she's in Milan to cover the Olympics. She did a live hit on Australian TV while absolutely hammered, and it's fantastic.
Episode 213: HIV PrEP Review
H. Nicole Magaña, medical student, reviews the history of PrEP and outlines the currently FDA-approved medications used for HIV prevention. Dr. Arreaza provides additional perspective on long-acting injectable options, including how quickly they begin to protect patients after initiation.
Written by Nicole Magaña, MSIV, American University of the Caribbean. Comments and edits by Hector Arreaza, MD.

You are listening to Rio Bravo qWeek Podcast, your weekly dose of knowledge brought to you by the Rio Bravo Family Medicine Residency Program from Bakersfield, California, a UCLA-affiliated program sponsored by Clinica Sierra Vista, Let Us Be Your Healthcare Home. This podcast was created for educational purposes only. Visit your primary care provider for additional medical advice.

Pre-exposure prophylaxis for HIV
Previous episodes related to HIV:
- Episode 67, HIV history (September 2021)
- Episode 68, HIV transmissibility (October 2021)
- Episode 70, HIV prevention, including PrEP with oral medications (October 2021)
- Episode 98, introducing Apretude, the first injectable for HIV PrEP, approved in December 2021 (June 2022)

What is pre-exposure prophylaxis (PrEP)? Pre-exposure prophylaxis, or PrEP, is the use of antiretroviral medications taken by individuals who are HIV-negative to prevent HIV acquisition. There are 30,000 new HIV infections annually in the US.

How effective is it? When taken as prescribed, PrEP is highly effective at reducing the risk of HIV transmission through sexual exposure and injection drug use. Patients who are adherent to PrEP can lower their risk of contracting HIV by 99%. The effectiveness of oral PrEP is highly adherence-dependent: in trials, the relative risk of HIV acquisition was 0.27 with adherence of 70% or higher, compared with 0.51 at 40-70% adherence, and there was no significant benefit with adherence of 40% or less.

How does PrEP work? PrEP works by maintaining therapeutic drug levels in the bloodstream and in target tissues. If HIV exposure occurs, viral replication is inhibited, preventing the establishment of infection.

Brief history of PrEP. The concept of PrEP originated from early animal studies demonstrating that antiretroviral medications could prevent retroviral transmission when administered before exposure. In 2010, the iPrEx trial showed that daily oral tenofovir disoproxil fumarate with emtricitabine (known as Truvada) significantly reduced HIV acquisition among men who have sex with men and transgender women. This was the first large clinical trial to demonstrate the effectiveness of PrEP. In 2012, the FDA approved oral Truvada, which is TDF/FTC (tenofovir disoproxil fumarate and emtricitabine), for HIV prevention. Since then, additional studies have expanded indications and introduced new formulations, including long-acting injectable options.

Who should be offered PrEP? PrEP should be considered for any HIV-negative individual at increased risk of HIV acquisition, including men who have sex with men, transgender individuals, heterosexual men and women with an HIV-positive partner, individuals with recent bacterial sexually transmitted infections, people who inject drugs, and individuals engaging in condomless sex with partners of unknown HIV status. Remember that PrEP should be offered in a nonjudgmental, patient-centered manner; make it a safe space to talk openly about HIV prevention.

Available HIV PrEP options. Daily oral PrEP: There are 2 formulations of tenofovir.
There is tenofovir disoproxil fumarate (TDF, in Truvada) and tenofovir alafenamide (TAF, in Descovy). Each is available in a tablet combined with emtricitabine, a nucleoside reverse transcriptase inhibitor.

Truvada: It is approved for all populations at risk through sexual exposure or injection drug use. Check for pre-existing CKD before starting this medication: do not give it to patients who have an estimated glomerular filtration rate of less than 60 mL/min. (6)

Descovy: This option is approved for men who have sex with men and transgender women, but it is not approved for individuals at risk through receptive vaginal sex. It has less impact on renal function and bone mineral density compared to Truvada, and it can be used in moderately reduced kidney function (GFR between 30-60 mL/min).

Truvada and Descovy are taken orally once a day. After patients start taking these medications, when are they considered to be protected?

Nicole: With daily oral PrEP, guidelines differ: the WHO and the International Antiviral Society-USA state it takes about 7 days, while the CDC states 21 days, to allow for adequate concentration in tissues (1). Adherence is critical for efficacy.

Injectable HIV PrEP. In 2021, the FDA approved the first injectable PrEP option, long-acting cabotegravir (CAB-LA), known on the market as Apretude. Cabotegravir is an integrase strand transfer inhibitor administered as an intramuscular injection. Dosing consists of an initial injection, a second injection one month later, and then maintenance injections every two months (1) (see the toy scheduler after this episode's references).

Another option is lenacapavir (Yeztugo), which the FDA approved for HIV PrEP in June 2025. Yeztugo is the first and only FDA-approved HIV prevention treatment that requires just two injections per year, offering a long-acting option for people who weigh at least 35 kg. It is given as 2 injections every 6 months: the first injections are accompanied by oral loading tablets on Day 1 and Day 2, and thereafter 2 injections are given on the same day every 6 months.

Clinical trials, including HPTN 083 and HPTN 084, demonstrated that injectable cabotegravir is superior to daily oral PrEP in preventing HIV infection. This advantage is largely due to improved adherence rather than differences in intrinsic drug potency. There have been no head-to-head comparisons between Yeztugo and Apretude, but both are very effective. Apretude starts protecting 7 days after the first dose; Yeztugo starts protecting 2 hours after Day 2 if the patient takes the oral loading doses, or 3-4 weeks if no oral load is taken.

Injectable PrEP is particularly beneficial for patients who struggle with daily pill adherence, have trouble swallowing pills, prefer a discreet option, have difficulty storing their medication, or have renal or bone disease that limits the use of tenofovir-based regimens like Truvada and Descovy (6). In one unpublished report by Medline, patients who received Apretude had an increase in bone mineral density compared to those who received Truvada (1).

Tests prior to starting PrEP. Before initiating PrEP, patients must be confirmed to be HIV-negative. Baseline evaluation includes HIV testing with a fourth-generation antigen/antibody assay, HIV RNA testing if acute infection is suspected, renal function testing for oral PrEP, hepatitis B screening, sexually transmitted infection screening, and pregnancy testing when appropriate. PrEP should not be started in individuals with known or suspected acute HIV infection.

Monitoring for patients on HIV PrEP.
Monitoring typically includes HIV testing every 2 to 3 months, STI screening every 3 to 6 months, renal function monitoring for those on oral (tenofovir-based) PrEP, and ongoing adherence and risk-reduction counseling. For injectable PrEP, adherence to the injection schedule is essential, as delayed dosing may increase the risk of resistance if HIV infection occurs. HIV PrEP does not prevent other STIs, so screening for STIs and counseling about prevention remain essential. Breakthrough HIV infections on PrEP are rare and most often associated with poor adherence or delayed diagnosis.

Truvada is the most studied option across all populations and is considered safe during pregnancy and breastfeeding. There is less data on the injectable options in patients who are pregnant, may become pregnant, or whose primary risk factor is injection drug use (1). Injectable PrEP provides an important alternative for patients with chronic kidney disease and bone disease (1).

Key takeaway: Pre-exposure prophylaxis is a safe, effective, and evidence-based strategy for HIV prevention. With both daily oral and long-acting injectable options available, PrEP can be individualized to meet patient needs. Normalizing PrEP discussions in clinical practice is essential to reducing new HIV infections and advancing public health goals.

Even without trying, every night you go to bed a little wiser. Thanks for listening to Rio Bravo qWeek Podcast. We want to hear from you, send us an email at RioBravoqWeek@clinicasierravista.org, or visit our website riobravofmrp.org/qweek. See you next week!

References:
Gandhi RT, Landovitz RJ, Sax PE, et al. Antiretroviral drugs for treatment and prevention of HIV in adults: 2024 recommendations of the International Antiviral Society-USA Panel. JAMA. 2025.
Bekerman E, Yant SR, VanderVeen L, et al. Long-acting lenacapavir acts as an effective preexposure prophylaxis in a rectal SHIV challenge macaque model. J Clin Invest. 2023.
Jogiraju V, Pawar P, Yager J, et al. Pharmacokinetics and safety of once-yearly lenacapavir: a phase 1, open-label study. Lancet. 2025.
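To make the injectable dosing cadence described in this episode concrete, here is a toy scheduler for the Apretude regimen (an initial injection, a second one month later, then maintenance injections every two months). The function names, the clamp-to-day-28 simplification, and the dates are illustrative only; this is a sketch, not clinical guidance.

from datetime import date

def add_months(d, months):
    # Naive month arithmetic; clamp to day 28 so the result is valid in any month.
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

def apretude_schedule(first_injection, maintenance_doses=4):
    # Initiation injections at month 0 and month 1, then every 2 months.
    doses = [first_injection, add_months(first_injection, 1)]
    for _ in range(maintenance_doses):
        doses.append(add_months(doses[-1], 2))
    return doses

for d in apretude_schedule(date(2026, 1, 5)):
    print(d.isoformat())  # 2026-01-05, 2026-02-05, 2026-04-05, ...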
What does it take to design a programming language from scratch when the target isn't just CPUs, but GPUs, accelerators, and the entire AI stack? In this episode, I sit down with legendary language architect Chris Lattner to talk about Mojo — his ambitious attempt to rethink systems programming for the machine learning era. We trace the arc from LLVM and Clang to Swift and now Mojo, unpacking the lessons Chris has carried forward into this new language. Mojo aims to combine Python's ergonomics with C-level performance, but the real story is deeper: memory ownership, heterogeneous compute, compile-time metaprogramming, and giving developers precise control over how AI workloads hit silicon. Chris shares the motivation behind Modular, why today's AI infrastructure demands new abstractions, and how Mojo fits into a rapidly evolving ecosystem of ML frameworks and hardware backends. We also dig into developer experience, safety vs performance tradeoffs, and what it means to build a language that spans research notebooks all the way down to kernel-level execution.
What if your data platform could power both critical business decisions and real-time product features at scale? In this episode, host Benjamin sits down with Magnus Dahlbäck, Senior Director of Data and Platform at Voi, to explore how a metrics-first approach and semantic layers transform data accessibility, why traditional ML and LLMs require different strategies for different problems, and how to balance FinOps costs while processing billions of IoT events daily. Whether you're building data infrastructure for a high-growth company or rethinking how your organization consumes data, this conversation is packed with practical strategies for unlocking data value and preparing your platform for AI. Tune in to discover how Voi ditched traditional BI tools and revolutionized their approach to enterprise analytics.
00:00-25:00: What should we expect from the new Buffalo Bills' 3-4 defense? ML breaks it down from system to players and more. Thanks to Batavia Downs Gaming and CH Insurance. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
Presidential candidate (?) Rahm Emanuel joins ML and Marc to talk about everything but his political ambitions. STRAIGHT DOPE: Who's Rahm Emanuel, […]
00:00-20:00: ML says there is nothing to fear about the Yankees in 2026. Cashman, Boone, aging players and more. Run it back again to just crash in October. Thanks to Byrne Dairy and CH Insurance. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
Rahul Raja is a Staff Software Engineer at LinkedIn, working on large-scale search infrastructure, information retrieval systems, and integrating AI/ML to improve ranking and semantic search experiences.

The Future of Information Retrieval: From Dense Vectors to Cognitive Search // MLOps Podcast #362 with Rahul Raja, Staff Software Engineer at LinkedIn

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
Information Retrieval is evolving from keyword matching to intelligent, vector-based understanding. In this talk, Rahul Raja explores how dense retrieval, vector databases, and hybrid search systems are redefining how modern AI retrieves, ranks, and reasons over information (a toy sketch of hybrid scoring follows these notes). He discusses how retrieval now powers large language models through Retrieval-Augmented Generation (RAG) and the new MLOps challenges that arise: embedding drift, continuous evaluation, and large-scale vector maintenance.
Looking ahead, the session envisions a future of Cognitive Search, where retrieval systems move beyond recall to genuine reasoning, contextual understanding, and multimodal awareness. Listeners will gain insight into how the next generation of retrieval will bridge semantics, scalability, and intelligence, powering everything from search and recommendations to generative AI.

// Bio
Rahul is a Staff Engineer at LinkedIn, where he focuses on search and deployment systems at scale. Rahul is a graduate of Carnegie Mellon University and has a strong background in building reliable, high-performance infrastructure. He has led many initiatives to improve search relevance and streamline ML deployment workflows.

// Related Links
Website: https://www.linkedin.com/
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter (@mlopscommunity): https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Rahul on LinkedIn: /rahulraja963/

Timestamps:
[00:00] Vector Search for Media
[00:33] RAG and Search Evolution
[04:45] Cognitive vs Semantic Search
[08:26] High Value Search Signals
[16:43] Scaling with Embeddings
[22:37] BM25 Benchmark Bias
[29:00] Video Search Use Cases
[31:21] Context and Search Tradeoff
[35:04] Personal Memory Augmentation
[39:03] Future of Cognitive Search
[44:51] Access Control in Vectors
[49:14] Search Ranking Challenge
[54:43] Hard Search Problems Solved
[58:29] Freshness vs Cost
[1:02:12] Wrap up
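For listeners who want a concrete picture of the hybrid search systems discussed in this episode, below is a toy sketch that blends a simplified BM25-style lexical score with a dense-vector similarity. Everything here (the scoring simplifications, the alpha weight, the tiny corpus, and the hard-coded vector_sim) is an illustrative assumption, not a production retrieval stack.

import math
from collections import Counter

def bm25_lite(query_terms, doc_terms, doc_freq, n_docs, k1=1.5, b=0.75, avg_len=8):
    # Simplified BM25: term-frequency saturation weighted by inverse document frequency.
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue
        idf = math.log(1 + (n_docs - doc_freq[t] + 0.5) / (doc_freq[t] + 0.5))
        score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc_terms) / avg_len))
    return score

def hybrid_score(lexical, vector_sim, alpha=0.7):
    # Blend dense (semantic) and sparse (keyword) evidence; alpha is a tunable weight.
    return alpha * vector_sim + (1 - alpha) * lexical

docs = [["dense", "retrieval", "for", "semantic", "search"],
        ["keyword", "matching", "with", "bm25"]]
doc_freq = Counter(t for d in docs for t in set(d))
lex = bm25_lite(["semantic", "search"], docs[0], doc_freq, len(docs))
print(hybrid_score(lex, vector_sim=0.82))  # vector_sim would come from an embedding model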
00:00-20:00: ML breaks down the Sabres. How they got here and are the playoffs happening, finally, in WNY? Thanks to Byrne Dairy and Western OTB. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
In today's Cloud Wars Minute, I analyze the leadership shift at Workday and what it means in the age of agentic AI.

Highlights
00:00 — I want to talk about a change at the top of Workday, and to point out somebody who's been a real superstar in this business: Workday co-founder, former co-CEO, former CEO, chairman, executive chairman, and, after resigning as CEO, now back in as CEO, Aneel Bhusri.
01:13 — Carl Eschenbach was going to be the person that ran all the business and the operations, and Aneel said, "I can go back to what I truly love," which is developing products and strategy. Carl Eschenbach left about a week ago. The board asked Bhusri to step back in as CEO, and he's done that. So there's no question that Aneel Bhusri's first love is products and strategy.
02:24 — He said, "Now, with Carl Eschenbach coming in a couple of years ago, I can go do this stuff I really love around products and strategy." It is this thing about never being trained to do it. He's on the board of directors at General Motors, a highly accomplished executive in a lot of ways, and Aneel certainly doesn't need the money.
03:13 — How does a company like Workday or Oracle or SAP or Salesforce balance those two things: the enterprise applications that brought them here, and the agentic AI that has to take them forward? Workday, several months ago, announced Workday ERP. From the outside, you've got SAP and Oracle always aggressively trying to go after Workday customers.
03:59 — I want to mention the way Aneel manages. He said, "I sort of became" — this is when machine learning, ML, was really becoming hot — "the Pied Piper of Workday. I was just going around to all the different developers and engineering teams and asking them, over and over and over again, what are you doing with ML?"
04:56 — And now they've got two great president-level executives at Workday: Rob Enslin and Gerrit Kazmaier. I think it's very likely that about a year from now, Workday will announce that Bhusri is going to become co-CEO, elevating one of those two, Enslin or Kazmaier, to the co-CEO role with him.

Visit Cloud Wars for more.
Episode 212: Managing HFpEF
Hyo Mun and Jordan Redden (medical students) explain how to manage HFpEF with medications and touch on some basics of nonpharmacologic treatment. Dr. Arreaza asks insightful questions to guide the discussion.
Written by Hyo Mun, MSIV, American University of the Caribbean; and Jordan Redden, MSIV, Ross University School of Medicine. Comments by Hector Arreaza, MD.

You are listening to Rio Bravo qWeek Podcast, your weekly dose of knowledge brought to you by the Rio Bravo Family Medicine Residency Program from Bakersfield, California, a UCLA-affiliated program sponsored by Clinica Sierra Vista, Let Us Be Your Healthcare Home. This podcast was created for educational purposes only. Visit your primary care provider for additional medical advice.

Treatment of HFpEF

Dr. Arreaza: Mike, if you had to name the one therapy everyone with HFpEF should be on, what is it?

Mike: That's easy! SGLT-2 inhibitors. This is the one slam-dunk we have in HFpEF. Empagliflozin (Jardiance) or dapagliflozin (Farxiga) should be started in essentially every patient with HFpEF, and it doesn't matter if they have diabetes or not.

Jordan: And that's worth repeating, because people still think of these as "diabetes drugs." They're not anymore. In HFpEF, SGLT-2 inhibitors reduce heart-failure hospitalizations, improve symptoms, improve quality of life, and even reduce cardiovascular death.

Dr. Arreaza: They're also simple. Empagliflozin 10 mg daily or dapagliflozin 10 mg daily. No titration, no drama. The effectiveness of these meds was established around 2019 with DAPA-HF and later with DELIVER. These trials demonstrated that dapagliflozin reduces worsening heart failure and cardiovascular events across the full spectrum of heart failure, from reduced to preserved ejection fraction, independent of diabetes status.

Mike: And the number needed to treat is about 28 to prevent one heart-failure hospitalization. That's excellent for a disease where we historically had almost nothing that worked.

Jordan: They're also safe in chronic kidney disease down to an eGFR of about 25, which makes them even more useful in this population.

Dr. Arreaza: Alright. We've got the SGLT-2 inhibitor; what's next?

Mike: Volume management. Loop diuretics are still the backbone of symptom control in HFpEF. If the patient is volume overloaded, you diurese, and you diurese aggressively.

Jordan: The goal is euvolemia. Dry weight, no edema, no orthopnea, no waking up gasping for air. A lot of these patients end up needing chronic oral loop diuretics to stay there.

Dr. Arreaza: Something to remember: HFpEF patients don't tolerate congestion well, and being "a little wet" is not benign. Let's move into RAAS inhibition. Where do ARBs and ACE inhibitors fit in?

Mike: Between ARBs and ACE inhibitors, ARBs are the winners in HFpEF. They actually reduce heart-failure hospitalizations — drugs like candesartan, losartan, and valsartan. ACE inhibitors? Not so much. They showed minimal benefit in older HFpEF patients, which is why we go with ARBs instead.

Jordan: But a lot of clinicians get nervous about ACE inhibitors and ARBs because of kidney function, so it's worth talking through how these drugs actually work in the kidney.

Dr. Arreaza: Yes, misunderstanding may lead to unnecessary drug discontinuation.

Jordan: Under normal conditions, the afferent arteriole brings blood into the glomerulus, and the efferent arteriole is constricted by angiotensin II.
That constriction keeps pressure high in the glomerulus and maintains filtration.

Mike: Here's what happens with an ACE inhibitor: you block angiotensin II, the efferent arteriole relaxes, glomerular pressure drops, and GFR dips slightly. Creatinine bumps up a little, and that scares people, but that's actually the whole point — that's how you get kidney protection long-term.

Jordan: High intraglomerular pressure causes hyperfiltration injury and scarring over time. Lowering that pressure protects the kidney long-term. The short-term GFR drop is the price you pay for long-term benefits.

Dr. Arreaza: So let's talk about CKD, because this is where people panic.

Mike: Right. ACE inhibitors and ARBs are not contraindicated in chronic kidney disease. In fact, they're recommended even in advanced stages. They reduce progression to kidney failure by about a third.

Jordan: The key is how you use them. Start low. Check creatinine and potassium one to two weeks after starting, then periodically. A creatinine rise of up to 30% from baseline is acceptable. That's not kidney injury, that's physiology.

Dr. Arreaza: And what about potassium creeping up?

Mike: You adjust the dose or add a potassium binder. You don't just automatically stop the drug.

Dr. Arreaza: Now, there is one absolute contraindication everyone needs to know about! (A board exam favorite.)

Jordan: Bilateral renal artery stenosis. This is the big one. In these patients, the kidneys are completely dependent on angiotensin II-mediated efferent constriction to maintain GFR. Take that away, and GFR collapses.

Mike: Creatinine can jump dramatically within days. If you see a creatinine rise of 20% or more shortly after starting an ACE inhibitor, you should be thinking about bilateral renal artery stenosis and stopping the drug immediately.

Dr. Arreaza: After revascularization, though, many patients can tolerate ACE inhibitors again, so this isn't always permanent. What about cardiorenal syndrome? That's where things get uncomfortable.

Mike: It is uncomfortable, but cardiorenal syndrome isn't a contraindication. These patients have severe heart failure and kidney disease, and their mortality is actually higher than patients with heart failure alone.

Jordan: ACE inhibitors still reduce mortality and slow kidney disease progression in this group. Studies show that stopping ACE inhibitors during acute heart-failure admissions increases in-hospital mortality three- to four-fold.

Dr. Arreaza: So we are cautious, but we don't avoid them.

Mike: Exactly. Start low, titrate slowly, monitor labs closely, and accept up to a 30% creatinine rise. You only stop if kidney function keeps worsening or potassium gets dangerously high.

Dr. Arreaza: Alright. Let's move on. What about mineralocorticoid receptor antagonists (MRAs)?

Jordan: Spironolactone or eplerenone might reduce hospitalizations in HFpEF, but the data is mixed. This is more of a "select patients" situation.

Mike: And you have to watch potassium and kidney function carefully, especially if they're already on an ACE inhibitor or ARB.

Dr. Arreaza: What about sacubitril-valsartan, also known as Entresto®?

Mike: Entresto may help patients with mildly reduced EF, roughly in the 45 to 57% range. It's not first-line for HFpEF, but in select patients it's reasonable.

Dr. Arreaza: Now let's clarify one of the biggest sources of confusion: beta blockers.

Jordan: Beta blockers are not a treatment for HFpEF itself.
They're only indicated if the patient has another reason to be on them, like coronary disease or atrial fibrillation.

Mike: And timing really matters here. You absolutely do not start beta blockers during acute decompensated heart failure. Their negative inotropic effects can make things worse when patients are volume overloaded.

Jordan: But, and this is critical, you also don't stop them if the patient is already taking one. Abrupt withdrawal causes a sympathetic surge and dramatically increases mortality.

Dr. Arreaza: If a patient is admitted on a beta blocker, what do we do?

Mike: Continue it at the same dose, or reduce it slightly if they're really unstable. Once they're euvolemic and stable, you can carefully titrate up.

Jordan: And watch for chronotropic incompetence. HFpEF patients often rely on their heart-rate response to exercise, and beta blockers can worsen exercise intolerance.

Dr. Arreaza: Beyond medications, HFpEF is really about treating comorbidities. Aerobic activity can be an initial strategy to improve exercise intolerance and has evidence of improving aerobic function and quality of life. Sodium restriction improves symptoms but does not decrease the risk of death or hospitalization.

Mike: Hypertension control is huge. For diabetes, the SGLT-2 inhibitors perform double duty. For obesity, weight loss improves symptoms, and GLP-1 agonists like semaglutide are absolute game changers.

Jordan: Don't forget sleep apnea, atrial fibrillation, and lifestyle. Exercise improves quality of life, even if it doesn't change hard outcomes. Lifestyle is the main treatment.

Dr. Arreaza: And when should you refer to cardiology?

Mike: You should refer when the diagnosis isn't clear, symptoms are not responding to treatment, volume management is difficult, there is end-organ dysfunction, or you are concerned about advanced heart failure.

Dr. Arreaza: It has been a great discussion. What is the takeaway?

Mike: HFpEF treatment isn't about one magic drug: it's about volume control, SGLT-2 inhibitors, smart use of RAAS blockade, and aggressive management of comorbidities.

Jordan: And it's about understanding the physiology, so you don't withhold life-saving therapies out of fear.

Dr. Arreaza: Well said. If you found this helpful, share it with a friend or colleague and rate us wherever you listen. This is Dr. Arreaza, signing off.

Jordan/Mike: Thanks!

Even without trying, every night you go to bed a little wiser. Thanks for listening to Rio Bravo qWeek Podcast. We want to hear from you, send us an email at RioBravoqWeek@clinicasierravista.org, or visit our website riobravofmrp.org/qweek. See you next week!

References:
Barzin A, Barnhouse KK, Kane SF. Heart failure with preserved ejection fraction. Am Fam Physician. 2025;112(4):435-440.
Heidenreich PA, Bozkurt B, Aguilar D, et al. 2022 AHA/ACC/HFSA guideline for the management of heart failure. Circulation. 2022;145(18):e895-e1032.
Kittleson MM, Panjrath GS, Amancherla K, et al. 2023 ACC expert consensus decision pathway on management of heart failure with preserved ejection fraction. J Am Coll Cardiol. 2023;81(18):1835-1878.
Anker SD, Butler J, Filippatos G, et al. Empagliflozin in heart failure with a preserved ejection fraction. N Engl J Med. 2021;385(16):1451-1461.
Solomon SD, McMurray JJV, Claggett B, et al. Dapagliflozin in heart failure with mildly reduced or preserved ejection fraction. N Engl J Med. 2022;387(12):1089-1098.
Pitt B, Pfeffer MA, Assmann SF, et al. Spironolactone for heart failure with preserved ejection fraction. N Engl J Med.
2014;370(15):1383-1392.
Yusuf S, Pfeffer MA, Swedberg K, et al. Effects of candesartan in patients with chronic heart failure and preserved left-ventricular ejection fraction. Lancet. 2003;362(9386):777-781.
Solomon SD, McMurray JJV, Anand IS, et al. Angiotensin-neprilysin inhibition in heart failure with preserved ejection fraction. N Engl J Med. 2019;381(17):1609-1620.
Kosiborod MN, Abildstrøm SZ, Borlaug BA, et al. Semaglutide in patients with heart failure with preserved ejection fraction and obesity. N Engl J Med. 2023;389(12):1069-1084.
Xie Y, Xu E, Bowe B, Al-Aly Z. Long-term cardiovascular outcomes of COVID-19. Nat Med. 2022;28(3):583-590.
Puntmann VO, Carerj ML, Wieters I, et al. Outcomes of cardiovascular magnetic resonance imaging in patients recently recovered from COVID-19. JAMA Cardiol. 2020;5(11):1265-1273.
Basso C, Leone O, Rizzo S, et al. Pathological features of COVID-19-associated myocardial injury. Eur Heart J. 2020;41(39):3827-3835.
Nalbandian A, Sehgal K, Gupta A, et al. Post-acute COVID-19 syndrome. Nat Med. 2021;27(4):601-615.
Badve SV, Roberts MA, Hawley CM, et al. Effects of angiotensin-converting enzyme inhibitors and angiotensin receptor blockers in adults with estimated GFR less than 60 mL/min per 1.73 m². Ann Intern Med. 2024;177(8):953-963.
Navis G, Faber HJ, de Zeeuw D, de Jong PE. ACE inhibitors and the kidney: a risk-benefit assessment. Drug Saf. 1996;15(3):200-211.
Textor SC, Novick AC, Tarazi RC, et al. Critical perfusion pressure for renal function in patients with bilateral atherosclerotic renal vascular disease. Ann Intern Med. 1985;102(3):308-314.
Hackam DG, Spence JD, Garg AX, Textor SC. Role of renin-angiotensin system blockade in atherosclerotic renal artery stenosis and renovascular hypertension. Hypertension. 2007;50(6):998-1003.
Ronco C, Haapio M, House AA, et al. Cardiorenal syndrome. J Am Coll Cardiol. 2008;52(19):1527-1539.
Prins KW, Neill JM, Tyler JO, et al. Effects of beta-blocker withdrawal in acute decompensated heart failure. JACC Heart Fail. 2015;3(8):647-653.
Jondeau G, Neuder Y, Eicher JC, et al. B-CONVINCED: Beta-blocker CONtinuation Vs. INterruption in patients with Congestive heart failure hospitalizED for a decompensation episode. Eur Heart J. 2009;30(18):2186-2192.

Theme song, Works All The Time by Dominik Schwarzer, YouTube ID: CUBDNERZU8HXUHBS, purchased from https://www.premiumbeat.com/.
In this episode, Subhajit Paul joins SE Radio host Kanchan Shringi to discuss how enterprise resource planning (ERP) systems work in practice and where machine learning and generative AI are beginning to fit into real-world ERP environments. Subhajit grounds the conversation in ERP fundamentals, explaining core business flows such as order-to-cash, procure-to-pay, and plan-to-produce, and why ERP systems are central to running large enterprises. He then walks through the realities of ERP implementation, sharing examples of both successful and failed projects and highlighting common challenges around testing, process coverage, integrations, and change management. The discussion also explores how AI is being applied in ERP today, including practical ML use cases such as inventory optimization and anomaly detection, as well as emerging generative AI and agent-based approaches. Brought to you by IEEE Computer Society and IEEE Software magazine.
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps:
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together like this.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the sort of slightly less capable last year's version, or the version from six months ago. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader uses. So I think what we want to do is always have kind of a highly capable, affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest-size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish them into something that actually fits in a form factor that you can actually serve. And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that: RL basically spikes models in a certain part of the distribution. You can spike models, but it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. And I think the sort of general dream is to be able to advance capabilities without regressing on anything else. And that whole capability merging without loss, I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model and a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed: you can get, you know, very close to your largest model's performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it enables us, for multiple Gemini generations now, to make the Flash version of the next generation as good as or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.
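To make the "logits as soft supervision" idea Dean describes here concrete, below is a minimal sketch of a standard distillation loss in the spirit of the 2014 Hinton, Vinyals, and Dean formulation. The PyTorch code, temperature, and loss weighting are illustrative assumptions, not Gemini's actual training recipe.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: the teacher's softened output distribution carries far
    # more signal per example than a one-hot label.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_student, soft_teacher, reduction="batchmean")
    soft_loss = soft_loss * temperature ** 2  # rescale gradients for the temperature
    # Hard targets: ordinary cross-entropy on the true labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage with random tensors standing in for real model outputs.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))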
Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think, you know, it's an important set of capabilities to have. And also, inference-time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, yeah, cool. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics-wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and the various AI overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now, not just "write me a for loop," but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well. Like TPUs: the interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long-context kinds of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like: in certain tasks, the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things people use models for, at some point the Flash model two generations out will be able to do basically everything. How do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true if the distribution of what people are asking the models to do is stationary, right? But what often happens is that as the models become more capable, people ask them to do more. I think this happens in my own usage. A year ago I'd try our models on some coding task, and they were okay at simpler things but didn't work very well for more complicated things. Since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask for much more complicated things. And that's true not just of coding. Now it's "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment," which is a much more complicated task than people would have asked a year ago. So you are going to want more capable models to keep pushing the frontier of what people ask the models to do. And that also gives us insight into where things break down, and how we can improve the model in those particular areas to make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards? Yeah.Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility: they're introduced, and maybe they're quite hard for current models. I like to think the best kinds of benchmarks are ones where the initial scores are maybe 10 to 30%, but not higher. Then you can work on improving that capability, whatever it is the benchmark is trying to assess, and get it up to 80 or 90%. Once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage, the public data or very related data being in your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know the data wasn't represented in the training set at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing how we make the model better at those kinds of things. Is it that we need a different kind of data to train on, more specialized for this particular kind of task?
Do we need a bunch of architectural improvements, or some sort of model capability improvements? What would help make that better?Shawn Wang [00:12:53]: Is there such an example, a benchmark that inspired an architectural improvement? I'm just jumping on that because you just mentioned it.Jeff Dean [00:13:02]: I mean, I think some of the long-context capability of the Gemini models, which came first in 1.5, really was about looking at, okay, we want to have...Shawn Wang [00:13:15]: Immediately everyone jumped to completely green charts. I was like, how did everyone crack this at the same time? Right. Yeah.Jeff Dean [00:13:23]: I mean, as you say, the single-needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or so, and most models don't actually go much beyond 128K these days. We're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, into the context, and then actually being able to make use of that, is useful. The opportunities to explore there are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, multi-needle or more realistic benchmarks, "take all this content and produce this kind of answer from a long context," that better assess what people really want to do with long context. Which is not just: can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, and you see the architectural thing you need to do to go fix it. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say: you're going to win short term, but longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to focus not on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. Right? What you would really want is: can I attend to the internet while I answer my question? But I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube, and the deeper representations we can find, not just for a single video but across many videos. And on a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful.
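For a rough sense of why quadratic attention caps out, here is a back-of-envelope sketch; the FLOP expression is the standard order-n-squared-times-d estimate for attention, and all numbers are purely illustrative.

```python
# Back-of-envelope: naive self-attention costs on the order of n^2 * d multiply-adds,
# so cost grows quadratically in context length. Illustrative only; real serving
# stacks layer KV caching, chunking, and sparsity on top of this.
def attention_flops(n_tokens: int, d_head: int = 128) -> float:
    return 2.0 * n_tokens**2 * d_head  # QK^T scores plus the weighted sum over V

for n in (128_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} tokens -> ~{attention_flops(n):.2e} FLOPs per layer")
# Going from 1M to 1B tokens multiplies the cost by a factor of a million,
# which is why "attending to the internet" needs the illusion of attention,
# not the literal computation.
```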
And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.Shawn Wang [00:16:26]: By the way, I did some math, and if you spoke all day, every day, for eight hours a day, you'd only generate a maximum of like a hundred K tokens, which very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...Shawn Wang [00:16:46]: Well, also, the classic example is you start going beyond language into proteins and whatever else is extremely information dense. Yeah.Jeff Dean [00:16:55]: I mean, one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. To some people that means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities, like LIDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. There are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in the trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it hints to the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example: Vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek-OCR paper that did that. And Vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a Vision-capable thing. So maybe Vision is just the king modality? Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion meaning video as opposed to static images, because there's a reason evolution has evolved eyes something like 23 independent times: it's such a useful capability for sensing the world around you. And that's really what we want these models to do: interpret the things we're seeing or paying attention to, and then help us use that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, some soccer goals, things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, the date when they happened, and a short description? And you now get an 18-row table of that information extracted from the video, which is not something most people think of as possible, turning video into a SQL-like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google about this? You mentioned attending to the whole internet, right? Google was almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, versus for an LLM, should you expect 20 links that are highly relevant? How do you internally figure out how to build the AI mode that does a much broader search and scan, versus the more human one? Yeah.Jeff Dean [00:20:47]: I mean, even in pre-language-model work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight methods, and you're down to like 30,000 documents or something. Then you gradually refine that, applying more and more sophisticated algorithms, and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to want to attend to trillions of tokens, but you're going to identify, say, the 30,000-ish documents, with maybe 30 million interesting tokens, and then figure out how you go from that to the 117 documents you really should be paying attention to in order to carry out the task the user has asked. You can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models. Then some system helps you narrow down from 30,000 to the 117, with a somewhat more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, is your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion: you are searching the internet, but you're finding a very small subset of things that are relevant.
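A toy sketch of that funnel in Python. The staged scorers here are trivial stand-ins (term overlap), assumed purely for illustration; a real system would use lexical indexes, embedding search, and neural rerankers at each stage.

```python
# Toy sketch of the retrieval funnel: cheap scorers prune a huge candidate set,
# then progressively more expensive scorers re-rank what survives.
def cheap_score(query: str, doc: str) -> float:
    # Stage-1 proxy: raw term overlap, the kind of signal you can compute at scale.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def rerank_score(query: str, doc: str) -> float:
    # Stage-2 proxy: length-normalized overlap, standing in for a small reranker model.
    words = doc.lower().split()
    return len(set(query.lower().split()) & set(words)) / (1 + len(words))

def retrieval_cascade(query, corpus, k1=30_000, k2=117):
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: rerank_score(query, d), reverse=True)[:k2]
    return stage2  # hand these survivors to the most capable model as context

docs = ["solar panel deployment report", "cat videos", "renewable energy in 2024"]
print(retrieval_cascade("solar energy deployment", docs, k1=2, k2=1))
```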
Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT went basically immediately inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but those are obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over these very high-traffic systems. It's Google, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video, and it predicts the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok at xAI as well.Jeff Dean [00:23:50]: Yeah. I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have a history of what the progression was? Oh yeah.Jeff Dean [00:24:09]: I actually gave a talk at, I guess, the web search and data mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that really happened in 2001 was that we were working to scale the system in multiple dimensions. One: we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And we also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system, where you have more and more shards as the index grows: you have like 30 shards, and if you want to double the index size, you make it 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of the index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact form the user typed in. That was 2001, very much pre-LLM, but it was really about softening the strict definition of what the user typed in order to get at the meaning.
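The arithmetic behind that decision is worth making explicit. A back-of-envelope sketch using the shard and replica counts quoted above; the per-machine RAM figure is an assumed 2001-era number, not a reported one.

```python
# Back-of-envelope for the 2001 in-memory index decision.
shards = 60            # index partitions, sized to bound per-query latency
replicas = 20          # copies of each shard to absorb query traffic
machines = shards * replicas
ram_per_machine_gb = 2                      # assumption: 2001-era hardware
aggregate_ram_gb = machines * ram_per_machine_gb
print(f"{machines} machines, ~{aggregate_ram_gb / 1024:.1f} TB aggregate RAM")
# Once the fleet's aggregate RAM exceeds one full copy of the index, you can
# serve entirely from memory, and extra query terms (synonyms) stop costing a
# disk seek on every shard.
```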
Alessio Fanelli [00:26:47]: What principles do you use to design these systems, especially when, in 2001, the internet is doubling or tripling in size every year? I think today you see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this? Yeah.Jeff Dean [00:27:08]: I mean, first, whenever you're designing a system, you want to understand which design parameters are going to be most important. How many queries per second do you need to handle? How big is the internet, and how big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will the system work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X and something suddenly becomes a hundred X, a very different point in the design space opens up, one that would not make sense at X but all of a sudden makes total sense at a hundred X. Like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because you now have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate is actually the parameter that changed the most, surprisingly. It used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news index, it's not actually that useful.Shawn Wang [00:29:11]: News is a special beast. Could you have split it onto a separate system?Jeff Dean [00:29:15]: Well, we did. We launched a Google News product. But you also want news-related queries that people type into the main index to be updated too.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to decide which pages should be updated and at what frequency.
Oh yeah.Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and the importance of pages. Even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah. Well, this mention of latency and saving things to disk reminds me of one of your classics, "Latency Numbers Every Programmer Should Know." Was there a general story behind that? Did you just write it down?Jeff Dean [00:30:06]: I mean, this has eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?Jeff Dean [00:30:25]: We had a data center in the Netherlands. I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say: okay, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? You can actually do thought experiments in 30 seconds or a minute with those basic numbers at your fingertips. And then, as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, whether on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then, how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Depending on your precision, I think it's sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of the thing you moved many, many times.
So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency you get is quite large. So, yeah.
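That trade-off is easy to put in numbers. A sketch using the two figures quoted above (roughly a thousand picojoules to move a weight, about one picojoule per multiply); the batch sizes are illustrative.

```python
# Back-of-envelope for the picojoule argument: move a weight once, reuse it
# across the batch. The two energy figures are the ones quoted in conversation.
E_MOVE_PJ = 1000   # move one weight from far SRAM into the multiplier unit
E_MAC_PJ = 1       # one low-precision multiply-accumulate

for batch in (1, 8, 64, 256):
    total = E_MOVE_PJ + batch * E_MAC_PJ     # one move amortized over `batch` uses
    print(f"batch={batch:>4}: ~{total / batch:,.0f} pJ per useful multiply")
# batch=1 pays ~1001 pJ per multiply; batch=256 pays ~5 pJ. Batch size 1 gives
# the best latency, but the energy (and cost) per token is far worse.
```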
Shawn Wang [00:34:04]: Is there a similar trick to what you did with putting everything in memory? I think Groq caused a lot of waves betting very hard on SRAM, for example. I wonder if that's something you already saw with the TPUs: to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a lot higher cost and latency bringing things in from HBM than bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism: spread it out over lots of chips, and you get quite good throughput and latency improvements from doing that. You're now striping your smallish-scale model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So, yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC, which is the most extreme version. How much is worth doing in hardware when things change so quickly? What's the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. As a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center; then the chip has to have a reasonable lifetime, taking you another three, four, five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field. And having people with interesting ML research ideas, things we think will start to work or be more important in that timeframe, really enables us to get interesting hardware features put into TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N plus one, but bigger changes require the chip design to be earlier in its design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area; if they work out, they make something ten times as fast, and if they don't, well, you burned a tiny amount of chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have cases where you adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. Sometimes you can take advantage of lower-precision things coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary...Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. And I think people have gotten a lot of mileage out of having very low bit precision, but then having scaling factors that apply to a whole group of those weights.Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that.
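A minimal sketch of that low-precision-plus-scale-factors idea: block-wise int4 quantization with one float scale per group of weights. This is a generic NumPy illustration, not any particular TPU or Gemini format.

```python
# Group-wise quantization sketch: store int4 weights plus one float scale per
# group, so a few bits per weight still track the local dynamic range.
import numpy as np

def quantize_groups(w: np.ndarray, group_size: int = 32):
    w = w.reshape(-1, group_size)
    # int4 symmetric range is [-8, 7]; scale each group so its max maps to 7.
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0 + 1e-12
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_groups(w)
print(f"mean abs error: {np.abs(w - dequantize(q, s)).mean():.4f}")
# The error stays small because each group's scale adapts to its own range,
# while the payload shrinks toward 4 bits per weight plus a small scale overhead.
```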
While we're on this topic: the concept of precision at all is weird when we're sampling. At the end of all this, we have chips that do very good math, and then we just throw a random number generator at it. So there's a movement towards energy-based models and processors. Obviously you've thought about it; I'm just curious, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends there. Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get an equivalent effect with a very small draft model: you predict eight tokens out, which increases the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models) and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency.
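A toy sketch of that speculative-decoding loop. The draft and target models here are trivial stand-ins so the example runs; real systems score all k draft positions in a single batched target pass (that is where the amortization comes from) and use a probabilistic accept rule rather than exact match.

```python
# Speculative decoding sketch: a cheap draft model proposes k tokens, the big
# target model verifies them, and the agreeing prefix is accepted.
def speculative_step(prefix, draft_next, target_next, k=8):
    # Draft phase: propose k tokens autoregressively (cheap model).
    ctx, proposed = list(prefix), []
    for _ in range(k):
        t = draft_next(ctx)
        proposed.append(t)
        ctx.append(t)
    # Verify phase: in a real system these k checks are ONE batched target pass.
    ctx, accepted = list(prefix), []
    for t in proposed:
        if target_next(ctx) == t:          # toy exact-match acceptance
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(target_next(ctx))  # fall back to the target's token
            break
    return accepted

# Toy models: the draft repeats the last token; the target alternates "a"/"b".
draft_next = lambda ctx: ctx[-1]
target_next = lambda ctx: "b" if ctx[-1] == "a" else "a"
print(speculative_step(("a",), draft_next, target_next, k=8))
```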
Shawn Wang [00:41:03]: Yeah. It's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you lose a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers.Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there are a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can accomplish much more significant pieces of work collectively than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research. And you kind of have it with AI Mode, in a way that's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval tasks, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models evaluate the results of what a first model did, maybe even for retrieval: can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.
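A sketch of that "same model, prompted as a critic" pattern. The llm() callable here is a keyword-matching stub so the example runs; it stands in for a real model client, which is an assumption, not a real API.

```python
# Critic-filter sketch: the same underlying model, prompted as a relevance judge,
# grades retrieved documents before they reach the answering step.
def llm(prompt: str) -> str:
    # Stub in place of a real model call: "judges" by keyword on the last line.
    last_line = prompt.strip().splitlines()[-1].lower()
    return "relevant" if "solar" in last_line else "irrelevant"

def critic_filter(query, docs):
    keep = []
    for doc in docs:
        verdict = llm(
            "You are a strict relevance judge.\n"
            f"Query: {query}\n"
            "Is this document relevant? Answer relevant or irrelevant.\n"
            f"{doc}"
        )
        if verdict == "relevant":
            keep.append(doc)
    return keep

print(critic_filter("solar deployment", ["solar panels in 2024", "cat memes"]))
```

The same trick scales up to rating 2,000 candidates and keeping the top 50; only the prompt changes, not the model.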
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff and the next part is super hard and nobody's figured it out. But it always feels like that, every year. And exactly so with this RLVR thing, where everyone's talking about, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge?Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems, right? "Fred has two rabbits. He gets three more rabbits. How many rabbits does he have?" That's a pretty far cry from the kinds of mathematics the models can do now: IMO and Erdős problems, in pure language. That is a really amazing jump in capabilities in a year and a half or so. And for other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that.Shawn Wang [00:46:20]: That would be AGI, as far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out. Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation that is neural-net-like in some way: lots of different neurons, with activation patterns firing when we see certain things. And that enables us to reason and plan, to do chains of thought and roll them back: "that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one." In a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things, and then a completely different way of thinking about those things.Shawn Wang [00:47:59]: Interesting. Maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think that progression, the IMO effort translating to Lean and using Lean, plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013 to 2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model; I want to do speech recognition, so I have a speech model. I think the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know how the IMO works, I don't know where the competition was held, I don't know the rules of it. I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can tackle any task. Which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And God knows, Gemini Pro is maybe one to ten trillion parameters; we don't know. But take the Gemma models, for example. A lot of people want open-source local models like that, and those models hold some knowledge that is not necessary, right? They can't know everything. You have the luxury that the big model should be capable of everything, but when you're distilling down to the small models, you're actually memorizing things that are not useful. So do we want to extract that? Can we divorce knowledge from reasoning?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than that obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?Jeff Dean [00:52:01]: We're not going to train Gemini on my email. We'd probably rather have a single model that can use retrieval from my email as a tool, have the model reason about it, retrieve from my photos or whatever, and then make use of that, with multiple stages of interaction. That makes sense.
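A toy sketch of that retrieval-as-a-tool pattern: rather than training on private email, the model calls a search tool over it and reasons over the results. The email store, search function, and loop here are stand-ins assumed for illustration, not Gemini APIs; a real agent would also rewrite the query between steps.

```python
# Retrieval-as-a-tool sketch: private data stays outside the weights and is
# consulted at inference time, with room for multiple retrieve/reason stages.
EMAILS = [
    "Flight AA100 to Tokyo departs March 3 at 9am.",
    "Dinner with Sam on March 4.",
]

def search_email(query: str):
    # Trivial keyword search standing in for a real retrieval tool.
    return [e for e in EMAILS if any(w in e.lower() for w in query.lower().split())]

def agent(question: str, max_steps: int = 3):
    context = []
    for _ in range(max_steps):          # each pass is one retrieve/reason stage
        for hit in search_email(question):
            if hit not in context:
                context.append(hit)
        if context:                      # "reason" over what we found so far
            return f"Based on {len(context)} email(s): {context[0]}"
    return "No relevant personal data found."

print(agent("when is my tokyo flight"))
```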
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would. It's multilingual translation capability, but improve its robotics capabilities. And we're always making these kind of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Pearl programming, you know, it'll still be good at Python programming. Cause we'll include it. Enough. Of that, but there's other long tail computer languages or coding capabilities that it may suffer on or multi, uh, multimodal reasoning capabilities may suffer. Cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma three end paper where they, there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better healthcare and the main. Gemini model is still improving. Do I need 50 billion tokens? Can I do it with a hundred, if I need a trillion healthcare tokens, it's like, they're probably not out there that you don't have, you know, I think that's really like the.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data. That is not public healthcare data, uh, not public health. But public healthcare data. Um, so I think there are opportunities there to say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low resource language in the context and it just learns. 
Yeah.Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource, because it's only spoken by, I think, 120 people in the world, and there's no written text.Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. You can put your whole data set in the context, right?Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in the world in those languages, and we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
In this episode of Data in Biotech, host Ross Katz sits down with James Yoder, Founder and CEO of OpenBench, to unpack a radical new approach to early-stage drug discovery. James shares how OpenBench's "success-driven" model shifts risk away from biotech partners by only charging for validated hits. They dive deep into computational screening, molecular modeling, and the company's evolving tech stack that's making hit discovery smarter and more accessible. Discover how data, AI, and strategic collaboration are redefining biotech R&D.
What you'll learn in this episode:
>> Why OpenBench moved away from SaaS to a success-based service model
>> How their computational platform predicts binding affinity and screens trillions of compounds
>> The role of data flywheels and ML in improving drug discovery success rates
>> Real-world case studies from biotech collaborations
>> How OpenBench evaluates druggable targets in one week
Meet our guest: James Yoder is the Founder and CEO of OpenBench. With a background in statistics, data science, and applied machine learning, he leads OpenBench's mission to deliver validated drug discovery hits through computational innovation and a success-driven business model.
About the host: Ross Katz is Principal and Data Science Lead at CorrDyn. Ross specializes in building intelligent data systems that empower biotech and healthcare organizations to extract insights and drive innovation.
Connect with our guest: Connect with James Yoder on LinkedIn. Sponsor: CorrDyn, a data consultancy.
Connect with us: Follow the podcast for more insightful discussions on the latest in biotech and data science. Subscribe and leave a review if you enjoyed this episode! Connect with Ross Katz on LinkedIn.
Sponsored by: This episode is brought to you by CorrDyn, the leader in data-driven solutions for biotech and healthcare. Discover how CorrDyn is helping organizations turn data into breakthroughs at CorrDyn.
What looks like a novelty on the shelf can be a very real business when the fundamentals are right.
In this episode of Business of Drinks, we sit down with John King, co-founder and owner of The Original Pickle Shot, to unpack how a bartender-born ritual turned into a nationally scaled spirits brand.
The numbers tell the story. The Original Pickle Shot is now selling roughly 110K 9-liter cases annually, growing ~15% year over year, and ranks as the 10th largest flavored vodka in the U.S., all without outside investment. What many assume is a niche product is, in reality, a high-velocity business driven by occasion, community, and repeat purchase.
John walks through what product-market fit actually looked like for the brand: not hype or marketing spend, but watching depletions rise organically as consumers pulled the product through retail. Early success came off-premise first, with 50 mL bottles driving trial and 750 mLs becoming the fastest-growing format as the brand earned its place in party and tailgate occasions.
For founders, this episode is a candid look at the trade-offs of staying self-funded. John shares how reinvesting every dollar back into the business forced discipline around expansion, prevented "false volume," and slowed state rollouts until the company had the operational backbone to support them. The cost: years of personal sacrifice and saying no to capital. The benefit: control, speed of decision-making, and sustainable velocity.
Distributors and retailers will appreciate John's clear-eyed take on partnerships, why beer vs. spirits houses matter less than alignment on expectations and margins, and how fun, irreverent brands still need hard data to win shelf space.
If you're building, selling, or scaling a drinks brand and want a grounded example of how a so-called niche becomes a category leader, this conversation delivers real-world lessons.
For the latest updates, follow us:
Business of Drinks: YouTube | LinkedIn | Instagram @bizofdrinks
Erica Duecy, co-host: Erica Duecy is founder and co-host of Business of Drinks and one of the drinks industry's most accomplished digital and content strategists. She runs the consultancy and advisory arm of Business of Drinks and has built publishing and marketing programs for Drizly, VinePair, SevenFifty, and other hospitality and drinks tech companies. LinkedIn | Instagram @ericaduecy
Scott Rosenbaum, co-host: Scott Rosenbaum is co-host of Business of Drinks and a veteran strategist and analyst with deep experience building drinks portfolios. Most recently, he was the Portfolio Development Director at Distill Ventures. Prior to that, he was the Vice President of T. Edward Wines & Spirits, a New York-based importer and distributor. LinkedIn
Caroline Lamb, contributor: Caroline is a producer and on-air contributor at Business of Drinks and a key account sales and marketing specialist at AHD Vintners, a Michigan-based importer and distributor. LinkedIn | Instagram @borkaline
If you enjoyed today's conversation, follow Business of Drinks wherever you're listening, and don't forget to rate and review us. Your support helps us reach new listeners passionate about the drinks industry. Thank you!
Homily by Fr. #GBPhươngĐìnhToại at the Mass of Our Lady of Lourdes, celebrated at 17:30 on February 11, 2026, in the chapel of the Pastoral Center, #TGPSG
Numen Technologies Limited is an Irish technology company driven by a simple but powerful principle: privacy is at the heart of everything they do, and in the modern age of AI, this is so important. To find out more about what they do, I caught up with one of their co-founders, Jeethu Rao. Jeethu talks about his background, on-device SLMs, current AI, the Moravec paradox, ChatGPT, and more. More about Numen Technologies Limited: Numen Technologies was founded in 2020 in Dublin, Ireland. They specialise in on-device machine learning, care deeply about privacy, and build ML-powered products that are private by default. Numen builds three products that put privacy first. Private LLM is an on-device AI assistant offering fully private, subscription-free intelligence on Mac, iPhone, and iPad. Slop or Not uses models trained on millions of samples, optimised for the Apple Neural Engine, to detect AI-generated text and images. Clean Links strips tracking from URLs and reveals what's behind shortened links and QR codes. Everything processes locally. No tracking. See more podcasts here.
Erika Erickson is BACK! ML is, too, but from Quebec City. Marc is always here. ALWAYS. STRAIGHT DOPE Erika and […]
Astronomer's Steven Hillion reveals how OpenAI, Anthropic, Uber, and Lyft use Apache Airflow to orchestrate AI and machine learning pipelines at scale on AWS.
Topics include:
Steven Hillion leads data and AI at Astronomer
Apache Airflow surpassed Spark and Kafka in community metrics
Astronomer coordinates data flow like a conductor orchestrating an instrumental platform
Organizations with data engineering teams use Airflow at scale
Customers already used Airflow for ML before official promotion
Uber and Lyft orchestrate pricing models using Airflow
Astronomer runs on AWS with close integration partnerships
OpenAI, Anthropic, and GitHub Copilot use Airflow for operations
Internal data team uses Airflow, creating feedback loops
Evolved from constrained AI reports to agentic workflows
Platform monitors generative AI output quality at user interactions
Metadata and context increasingly critical for AI applications
Learn more at Astronomer's Data FlowCast podcast
Participants: Steven Hillion – SVP, Data and AI, Astronomer
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
This week on DanceSpeak, I sit down with Brian 'Footwork' Green, a master teacher and influential figure in street and club dance culture whose impact spans generations. Recorded live in August 2025, this episode captures Brian's unfiltered thoughts on musicality, lineage, and what often gets misunderstood about street dance. We explore competition versus convention culture, the realities of the dance economy, and the difference between who you are and the artistic name you move under. Brian speaks honestly about off-beat dancing, “auto-tuned” movement, teaching, trends, and what gets lost when dance drifts away from the heart. The conversation also touches on race, representation, and identity in dance spaces—layered, nuanced, and rooted in lived experience rather than soundbites. Insightful, funny, challenging, and deeply grounded in culture, this episode is for dancers who love dance enough to think about it, question it, and keep it alive. Instagram – https://www.instagram.com/gogalit Website – https://www.gogalit.com/ Fit From Home – https://galit-s-school-0397.thinkific.com/courses/fit-from-home You can connect with Brian on Instagram https://www.instagram.com/brianfootworkgreen/. You can purchase Brian's on-line dance classes https://www.theybarelyunderstandhello.com/#classes.
From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn "peeking inside the model" into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.
In this episode, we go far beyond the usual "SAEs are cool" take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by "slurping supervision through a straw," hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.
Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic-to-real transfer, regulated domains, no access to sensitive data). We also get a live window into what "frontier-scale interp" means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and "pixel-space" world models.
We discuss:
* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What "interpretability" actually means in practice: not just post-hoc poking, but a broader "science of deep learning" approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: "surgical edits" for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about "clean concept spaces"
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic-to-real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a "Gen-Z slang" feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / "user-pleasing" circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + "pixel-space" interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from "data in, weights out" to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training
Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI
Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng
Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell
Full Video Episode
Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire
Transcript
Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an unofficial one that sort of resonates more with a different audience?Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people who have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places it can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know, it's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lede, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting like health use cases. I don't know how related they are in practice.Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like just to work with health systems and generally in that domain. Yeah.Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.Myra Deng [00:02:37]: Did we overlap at all?Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done at Goodfire?Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team. And at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as I could describe myself, because I've worked on a range of things.
And, you know, it's a fun time to be at a team that's still reasonably small. I think when I joined I was one of the first like ten employees; now we're above 40, but still, there's always a mix of research and engineering and product and all of the above that needs to get done. And I think everyone across the team is, you know, pretty, pretty switch-hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform, for more of like flexing some of the kind of MLE and developer skills as well.Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.Myra Deng [00:03:58]: Yeah, yeah. So I also started as, and still am, a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this.Shawn Wang: Which we both visited when you had that open house thing. It was really nice.Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like Mark, I spend a lot of my time, as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems, and how does that then translate into a platform that's repeatable, or a product, and working across, you know, the engineering and research teams to make that happen, and also communicating to the world: what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.Shawn Wang [00:05:01]: I love "what is" things because that's a very crisp like starting point for people coming to a field. Let's do a fun thing. Vibhu, why don't you try tackling what is interpretability, and then they can correct us.Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in the model, in the internals. So there are different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have a hypothesis, something that you want to learn about what's happening in a model's internals, and then you're trying to solve that from there. You can do stuff like, you know, activation patching. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we adjust what's happening on the model internals? How'd I do?Mark Bissell [00:06:12]: That was really good. I think that was great.
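A minimal sketch of the probing idea Vibhu just described: cache a model's hidden activations and fit a linear classifier on top of them. The choice of gpt2, layer 6, and the two toy labeled sentences are illustrative assumptions, not Goodfire's actual setup.

```python
# Toy linear probe: fit a classifier on cached hidden states.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

texts = ["The capital of France is Paris.", "The capital of France is Rome."]
labels = [0, 1]  # toy labels: 0 = factual, 1 = not

feats = []
with torch.no_grad():
    for t in texts:
        out = model(**tok(t, return_tensors="pt"))
        # Mean-pool one mid-layer's hidden states as the probe input.
        feats.append(out.hidden_states[6].mean(dim=1).squeeze(0).numpy())

probe = LogisticRegression(max_iter=1000).fit(feats, labels)
print(probe.predict(feats))  # sanity check on the training pair
```

The appeal is exactly what comes up later in the conversation: the probe is a tiny linear model riding on activations the host model already computes, so it adds essentially no inference cost.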
I think it's also kind of a minefield: if you ask 50 people who quote unquote work in interp, like, what is interpretability, you'll probably get 50 different answers. And yeah, to some extent also where Goodfire sits in the space. I think that we're an AI research company above all else. And interpretability is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even broader, as almost like the science of deep learning, and just taking a not-black-box approach to kind of any part of the AI development life cycle, whether that means using interp for like data curation while you're training your model, or for understanding what happened during post-training, or for, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that, you know, are sort of also part of the fundraise, around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-hoc poking at models as opposed to actually using this to intentionally design them.Shawn Wang [00:07:29]: Is this post-training or pre-training, or is that not a useful distinction?Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking like rollouts, or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.Myra Deng [00:07:50]: And I think in a lot of the news that you've seen on like Twitter or whatever, you've seen a lot of unintended side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4o GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. They called it that on the blog post and I was like, wait, did OpenAI officially use that term? And I'm like, that's funny. But like, yeah, I guess it's the pitch that if they had worked with Goodfire, they would have avoided it. Like, you know what I'm saying?Myra Deng [00:08:51]: I think so. Yeah. Yeah.Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has.
So, you know, one of the things that we've been looking at, or is another like common area where you would want to make a somewhat surgical edit, is some of the models that have, say, political bias. Like you look at Qwen or, um, R1, and they have sort of like this CCP bias.Shawn Wang [00:09:27]: Is there a CCP vector?Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah, parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.Shawn Wang [00:10:09]: Yeah.Mark Bissell [00:10:09]: Sort of this like double descent idea of having a model that is able to learn a generalizing solution: as opposed to, even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible, you know, ways to do that. Can interp solve the double descent problem?Shawn Wang [00:10:41]: Depends, I guess, on how you... Okay. So I viewed double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is generalizing, what is still changing even though the loss is not changing, then maybe you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss, and then you have a smooth curve. Yeah.Mark Bissell [00:11:11]: I think that's certainly like the domain of problems that we're looking to get at.Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing in ML research, where like, if you believe in scaling, then you need to know where to scale. But if you believe in double descent, then you don't believe in anything where anything levels off, like.Vibhu Sapra [00:11:30]: I mean, also tangentially, there's like, okay, when you talk about the China vector, right, there's the subliminal learning work. It was from the Anthropic fellows program, where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of: okay, if we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle.
Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on, not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. And you see behaviors like that, that just defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like Platonic representation, like probably it will transfer across different models as well. Oh, you think so?Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed, sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space, and then sort of doing this distillation, yeah, like it pushes it towards having certain other tendencies.Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is, when you guys work at an interp lab, how do you decide what to work on, and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely, like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.Myra Deng [00:14:07]: It's a really good question. I feel like we've always, from the very beginning of the company, thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are, and then taking that back and trying to apply the current state of the art to those problems, and then seeing where they fall down, basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpretability models.
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case, and how do we improve it beyond that? And Ekdeep, one of our researchers who just joined, and Atticus are like steering experts and have spent a lot of time trying to figure out, like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like: look at real world problems, try to translate that into a research agenda, and then like hill-climb on both of those at the same time.Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click when you drop hints, like "we found some problems with SAEs." Okay. What are they? You know, and then we can go into the demo. Yeah.Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful, or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods, where you get to peek into the AI's mind, but sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think wasn't it an SAE-based approach that actually did prove to be the most generalizable?Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reason it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we got more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get another chance: like, what is the overall, like, what is Rakuten's usage or production usage?
Yeah.Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII so that they don't route private user information.Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people, around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, Rakuten is like, I think, the number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem: they were encountering things like synthetic-to-real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets, and then hope that that transfers out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that had us pulling our hair out. And then also, a lot of tasks, you'll see you might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem, to just sort of get like general results, where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token-level classification, so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of seeing what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where a problem that seems simple right off the bat ends up being more complex as you keep diving into it.Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is a lot of these methods are very efficient, right? So you're just looking at a model's internals itself, compared to a separate, like, guardrail: LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self-detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency, really. Excellent.Shawn Wang [00:21:17]: You have the steering demos lined up, so we can just kind of see what you got. I don't, I don't actually know if this is like the latest, latest or like alpha thing.Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. So this will give a sense for the technology, so you can see the steering in action.
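Before the demo narration, a hedged sketch of what activation steering looks like mechanically: a PyTorch forward hook that adds a direction to one layer's hidden states during generation. The gpt2 model, layer index, random direction, and strength are placeholder assumptions; in a real system like the one demoed below, the direction would come from an SAE feature rather than random noise.

```python
# Sketch of activation steering via a forward hook; model, layer, and
# the random stand-in "feature direction" are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

direction = torch.randn(model.config.n_embd)
direction = direction / direction.norm()  # unit-norm direction
alpha = 8.0  # steering strength, tuned by hand in practice


def steer(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden
    # states; add the scaled direction at every sequence position.
    return (output[0] + alpha * direction,) + output[1:]


handle = model.transformer.h[6].register_forward_hook(steer)
ids = tok("The weather today is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=12)[0]))
handle.remove()  # un-steer the model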
Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability-related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models, as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back at the office. Well, hopefully should be, um, that's too much to run on that Mac. Yeah. I think it takes a full, like, H100 node. I think it's like, you can run it on eight GPUs, eight H100s. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang code base that we've been working on. So I'm going to tell it: Hey, this SGLang code base is slow. I think there's a bug. Can you try to figure it out? It's a big code base, so it'll, it'll spend some time doing this. And then on the right here, I'm going to initialize, in real time, some steering. Let's see here.Mark Bissell [00:23:33]: Searching for any bugs. Feature ID 43205.Shawn Wang [00:23:38]: Yeah.Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally. It might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing it do "this code base is massive, for real." So we're going to start seeing Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and its actual outputs.Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools, uh, and stuff. It's purely sort of its demeanor. And there are other features that we found for interesting things like concision, so that's more of a practical one; you can make it more concise. Um, the types of programming languages that it uses. But yeah, as we're seeing it come in: pretty good outputs.Shawn Wang [00:24:43]: "Scheduler code is actually wild."Vibhu Sapra [00:24:46]: "Yo, this code is actually insane, bro."Vibhu Sapra [00:24:53]: What's the process of training an SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, this like autonomous interp, something about how agents for interp are different than like coding agents. I don't know, while this is spewing output, but how, how do we find feature 43205?
Yeah.Mark Bissell [00:25:15]: So in this case, um, our platform that we've been building out for a long time now supports all the sort of classic out-of-the-box interp techniques that you might want to have, like SAE training, probing, things of that kind. I'd say the techniques for like vanilla SAEs are pretty well established now, where you take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then, yeah, it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's top-k SAEs, batch top-k SAEs, um, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them to actually understand that this is a Gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is: look at all of your input data set examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough data set through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang. You know, the kind of text that's like, oh, I'm in this, I'm in this. Um, and, um, so, you know, you could have a human go through all 43,000 concepts and...Vibhu Sapra [00:26:34]: And I've got to ask the basic question, you know: can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that; hallucination is harder. But we've seen that models internally have some awareness of like uncertainty, or some sort of like user-pleasing behavior, that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately, and then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, so part of what I like about that question is there are SAE-based approaches that might like help you get at that. But oftentimes the beauty of SAEs, and like we said, the curse, is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing, as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out is the one you want.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah.
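To make the pipeline Mark outlines above concrete, here is a toy top-k sparse autoencoder: encode activations, keep only the k largest latents, decode, and minimize reconstruction error, then "label" a latent by its top-activating inputs. All dimensions and the random stand-in activations are assumptions; real pipelines run this over cached residual streams at far larger scale and use an LLM to label features.

```python
# Toy top-k SAE over random stand-in "activations"; sizes are assumptions.
import torch
import torch.nn as nn

d_model, d_sae, k = 64, 512, 8
acts = torch.randn(4096, d_model)  # stand-in for cached model activations

enc = nn.Linear(d_model, d_sae)
dec = nn.Linear(d_sae, d_model)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

for _ in range(200):
    z = enc(acts)
    top = torch.topk(z, k, dim=-1)  # keep only the k largest latents
    sparse = torch.zeros_like(z).scatter(-1, top.indices, top.values)
    loss = ((dec(sparse) - acts) ** 2).mean()  # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

# Crude "auto-labeling": inspect which inputs fire a given latent hardest.
with torch.no_grad():
    print(enc(acts)[:, 123].topk(5).indices)  # top examples for latent 123
```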
So there's, you know, sort of problems with like feature splitting and feature absorption. And then there's the off-target effects, right? Ideally, you would want to be very precise. Otherwise, if you reduce the hallucination feature, suddenly maybe your model can't write creatively anymore. And maybe you don't like that, but you want to still stop it from hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I mean, I guess just because your demo is done, any other things that you want to highlight or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there's not a whole lot of interp being applied to models quite at this scale. You know, Anthropic certainly has some research, and yeah, other teams as well. But it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real-time steering of a trillion parameter model would have sounded...Shawn Wang [00:29:33]: Yeah. The fact that it's real time: like you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual like production use case would be on that, like the real-time editing. That's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. Like people haven't done that much with like, how does this work with or without prompting? Right. How does this work with fine tuning? Like, there's a whole hype of continual learning, right? So there's just so much to see. Like, is this another parameter, like a parameter we just kind of leave as a default and don't use? So I don't know. Maybe someone here wants to put out a guide on like how to use this with prompting, when to do what.Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; just can't say enough amazing things about Ekdeep. But he actually has a paper, as well as some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive neuroscience Bayesian framework, but basically you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even like get quantitative about the magnitude of steering you would need to do to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so like formally equivalent, actually, in the limit. Right.Mark Bissell [00:31:24]: So like one case study of this is for jailbreaks. I don't know, have you seen the stuff where you can do like many-shot jailbreaking? You like flood the context with examples of the behavior.
And Anthropic put out that paper.Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.Mark Bissell [00:31:40]: Like, yeah, what's in this in-context learning and activation steering equivalence paper is you can like predict the number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of like equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.Shawn Wang [00:32:02]: I was going to say, like, you know, I can like back-rationalize that this makes sense, because, you know, what context is, is basically just, you know, it updates the KV cache, kind of, and then every next-token inference is still, like, you know, a sum over everything all the way up, plus all the context to date. And you could, I guess, theoretically replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like you did. So it's like not exactly equivalent.Mark Bissell [00:32:33]: Right, right. There's, sort of, you need to get precise about, yeah, how you sort of define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature... Yeah. The title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. So Eric Bigelow and Dan Wurgaft, who are doing fellowships at Goodfire; Ekdeep's the final author there.Myra Deng [00:32:59]: I think actually, to your question of like, what is the production use case of steering? I think maybe if you just think like one level beyond steering as it is today. Like imagine if you could adapt your model to be, you know, an expert legal reasoner, like in almost real time, like very quickly, efficiently, using human feedback, or using like your semantic understanding of what the model knows and where it knows that behavior. I think that while it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about like what's the next interface for model customization and adaptation is a really interesting problem for us. Like we have heard a lot of people actually interested in fine-tuning and RL for open-weight models in production. And so people are using things like Tinker or kind of like open source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's like something we're looking into.Shawn Wang [00:34:06]: Yeah. I hadn't thought about this, but Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?Shawn Wang [00:34:25]: Yeah. You're not touching a base model. You're touching an adapter. It's kind of, yeah.Mark Bissell [00:34:30]: Right. But I guess it still is like more in parameter space then. I guess it's maybe like: are you modifying the pipes, or are you modifying the water flowing through the pipes, to get what you're after? Yeah. Just maybe one way.Mark Bissell [00:34:44]: I like that analogy.
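The pipes-versus-water contrast just described, in a few lines of toy algebra: a rank-one LoRA perturbs the weight matrix itself, and its contribution depends on the input, while steering leaves the weights alone and adds a fixed vector to the activation. Shapes and tensors here are toy assumptions, not either product's implementation.

```python
# Toy contrast between a rank-one LoRA update and activation steering.
import torch

d = 16
W = torch.randn(d, d)  # frozen base weight ("the pipes")
x = torch.randn(d)     # incoming activation ("the water")

# Rank-one LoRA: change the pipes. The extra term (a @ b) @ x is
# input-dependent: a different x gives a different effect.
a, b = torch.randn(d, 1), torch.randn(1, d)
lora_out = (W + a @ b) @ x

# Steering: change the water. A fixed direction v is added to the
# activation regardless of what x was.
v = torch.randn(d)
steer_out = W @ x + v

print(lora_out.shape, steer_out.shape)  # both (16,)
```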
That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on. And just the fact that, like, I hope that we look back at how we're currently training models and post-training models and just think: what a primitive way of doing that. Like there's no intentionality really in...Shawn Wang [00:35:06]: It's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: you know, he has a couple of young kids, and he talks about, like, what if I could only teach my kids how to be good people by giving them cookies or, like, you know, giving them a slap on the wrist if they do something wrong? Like, not telling them why it was wrong or, like, what they should have done differently or something like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, and, you know, it's sample-inefficient. There's, you know, what do they say? It's like slurping supervision through a straw. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that is, uh, internalized, and, you know, steering is an inference-time way of sort of getting at that idea. But ideally you're moving to a world where it is much more intentional design, in perpetuity, for these models.Vibhu Sapra [00:36:04]: Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Right? Like, does this, do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can watch that. There wasn't too much of a connect there, but it's still something, you know, it's something they want to push for down the line.Mark Bissell [00:36:33]: It can be useful for all of the above. Like there are certainly post-hoc use cases where it doesn't need to touch that.Vibhu Sapra [00:36:39]: I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? Like I would say, if you're interested in getting into research, mechinterp is one of the most approachable fields, right? A lot of this: train an SAE, train a probe, this stuff. Like, the budget for this: one, there's already a lot done. There's a lot of open source work. You guys have done some too. Um, you know,Shawn Wang [00:37:04]: There's like notebooks from the Gemini team, from Neel Nanda, like: this is how you do it. Just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're like not even technical with any of this, you can still make like progress there. You can look at different activations. But, uh, if you do want to get into training, you know, training this stuff, correct me if I'm wrong, is like in the thousands of dollars; it's not that high scale. And then same with like, you know, applying it: doing it for post-training or all this stuff is fairly cheap, on a scale of, okay, I want to get into like model training, I don't have compute for like, you know, pre-training stuff. So it's a very nice field to get into. And also there's a lot of like open questions, right? Um, some of them have to do with, okay, I want a product, I want to solve this. Like there's also just a lot of open-ended stuff that people could work on.
That's interesting, right? I don't know if you guys have any calls for, like, what are open questions, what's open work that you'd either collaborate on or, like, you'd just like to see solved; just, you know, for people listening that want to get into mechinterp, because people always talk about it. What are, what are the things they should check out? And, of course, you know, join you guys as well. I'm sure you're hiring.Myra Deng [00:38:09]: There's a paper, I think from, was it Lee, uh, Sharkey? It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just like a really comprehensive overview of what are the things that experts in the field think are the most important problems to be solved. I also think, to your point, it's been really, really inspiring to see, I think, a lot of young people getting interested in interpretability. Actually not just young people, also like scientists who have been, you know, experts in physics for many years, or in biology or things like this, um, transitioning into interp, because the barrier to entry is, you know, in some ways low, and there's a lot of information out there and ways to get started. So it's really cool to see. There's this anecdote of like professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how, I guess, like exciting the field is, how fast it's moving, how quick it is to get started, and things like that.Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there's an open source mechinterp Slack channel. People are always posting questions, and folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the open problems paper is a really good one.Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...Vibhu Sapra [00:39:40]: Normally summer internship style.Myra Deng [00:39:42]: Yeah, but they've been doing it year round now. And actually a lot of our full-time staff have come through that program or gone through that program. And it's great for anyone who is transitioning into interpretability. There's a couple other fellows programs. We do one, as well as Anthropic. And so those are great places to get started if anyone is interested.Mark Bissell [00:40:03]: Also, I think it's been seen as a research field for a very long time. But I think engineering... I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it does scale up.Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mechinterp track at AI Europe, because I see these industry applications now emerging. And I'm pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mechinterp conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet. It's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.
So at ICML, I think the title of the McInturk workshop this year was actionable interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.Vibhu Sapra [00:41:13]: And I mean, like, just, you know, being in Europe, you see the Interp room. One, like old school conferences, like, I think they had a very tiny room till they got lucky and they got it doubled. But there's definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered the paper last week. It's like two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah. Yeah. Yeah.Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet. It's just Interp for code. I think it's like an abnormally important field. We haven't mentioned this yet. The conspiracy theory last two years ago was when the first SAE work came out of Anthropic was they would do like, oh, we just used SAEs to turn the bad code vector down and then turn up the good code. And I think like, isn't that the dream? Like, you know, like, but basically, I guess maybe, why is it funny? Like, it's... If it was realistic, it would not be funny. It would be like, no, actually, we should do this. But it's funny because we know there's like, we feel there's some limitations to what steering can do. And I think a lot of the public image of steering is like the Gen Z stuff. Like, oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To like be a legal reasoner seems like a huge stretch. Yeah. And I don't know if that will get there this way. Yeah.Myra Deng [00:42:36]: I think, um, I will say we are announcing. Something very soon that I will not speak too much about. Um, but I think, yeah, this is like what we've run into again and again is like, we, we don't want to be in the world where steering is only useful for like stylistic things. That's definitely not, not what we're aiming for. But I think the types of interventions that you need to do to get to things like legal reasoning, um, are much more sophisticated and require breakthroughs in, in learning algorithms. And that's, um...Shawn Wang [00:43:07]: And is this an emergent property of scale as well?Myra Deng [00:43:10]: I think so. Yeah. I mean, I think scale definitely helps. I think scale allows you to learn a lot of information and, and reduce noise across, you know, large amounts of data. But I also think we think that there's ways to do things much more effectively, um, even, even at scale. So like actually learning exactly what you want from the data and not learning things that you do that you don't want exhibited in the data. So we're not like anti-scale, but we are also realizing that scale is not going to get us anywhere. It's not going to get us to the type of AI development that we want to be at in, in the future as these models get more powerful and get deployed in all these sorts of like mission critical contexts. Current life cycle of training and deploying and evaluations is, is to us like deeply broken and has opportunities to, to improve. So, um, more to come on that very, very soon.Mark Bissell [00:44:02]: And I think that that's a use basically, or maybe just like a proof point that these concepts do exist. 
Like if you can manipulate them in precisely the right way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse-grained peek at what that looks like. But I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.
Myra Deng [00:44:30]: There were bad-code features. I've got it pulled up.
Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys were talking.
Shawn Wang [00:44:35]: This is exactly it.
Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not generic typo detection: it's typos in code, not typical typos. You can see it clearly activates where there's something wrong in code. And they have malicious code, code error, a whole bunch of finer-grained sub-features. Yeah.
Shawn Wang [00:45:02]: So the rough intuition for me, why I talked about post-training, was that you run a few different rollouts with these features turned off and on, and that's synthetic data you can post-train on.
Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is; they do the real hard work.
Myra Deng [00:45:19]: You guys have the right idea. Exactly. We replicated a lot of these features in our Llama models as well. I remember there was...
Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours, DeepMind has open-sourced a lot of SAEs on Gemma, and even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.
Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well. An amazing piece of work for visualizing these things.
Myra Deng [00:45:49]: Yeah, exactly.
Shawn Wang [00:45:50]: I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys that we haven't really talked about yet. This is a bit of a crossover for me, because we're starting up a separate pod for AI for science, just because it's such a huge investment category and I'm less qualified to do it; we have bio PhDs to cover that. But I want to recap your work, maybe on the Evo 2 stuff, and then build forward.
Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: another interesting lens on interpretability in general is that a lot of the techniques we described are ways to solve the AI-human interface problem, and bidirectional communication is the goal there. What we've been talking about with intentional design of models, steering, and more advanced techniques is having humans impart our desires and control over models. The reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well.
What knowledge can the AIs teach us, as the other direction of that interface? Some of our life-sciences work to date has been getting at exactly that question. Some of it looks like debugging these various life-sciences models: understanding whether they're actually performing well on tasks or picking up on spurious correlations. For instance, with genomics models you'd like to know whether they're focusing on the biologically relevant things you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they understand elements of the human genome we don't have names for, specific discoveries they've made that we don't know about; surfacing those is a big goal. And we're already seeing that: we're partnered with organizations like Mayo Clinic, a leading research health system in the United States; Arc Institute; and a startup called Prima Mente, which focuses on neurodegenerative disease. In our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.
Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out because there's so much potential in the research. It's very interesting how it's basically the same as language models, just with a different underlying data set. It's the same exact techniques; there's no change, basically.
Mark Bissell [00:48:59]: Yeah. And even in other domains, right? In robotics, I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes actions. It's transformers all the way down.
Vibhu Sapra [00:49:15]: Like we have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on things like 3D scans and medical domain knowledge too. So there's a push from both sides. But one of the things about mech interp is that you're a little more cautious in some domains, right? Healthcare mainly being one: guardrails, understanding. We're more risk-averse to something going wrong there. So even just from a basic-understanding standpoint, if we're trusting these systems to make claims, we want to know why and what's going on.
Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage and things like that. Say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment, actually is a really big unlock for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
And I feel like we actually are doing that through our interp techniques. And almost by accident: we got reached out to very early on by these healthcare institutions, and none of us had healthcare backgrounds.
Shawn Wang [00:50:49]: How did they even hear of you? A podcast.
Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.
Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.
Myra Deng [00:50:55]: Everyone can call us.
Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.
Myra Deng [00:50:59]: Yeah, they reached out. They were like, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models; let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about the domain itself.
Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning is everywhere, right? It's a general insight. It probably applies to finance too, I think, which would be fun. I don't know if you have anything to say there.
Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. It really runs the gamut.
Vibhu Sapra [00:51:40]: Yeah. Awesome. And for those who should reach out: you're obviously the experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants plug-and-play steering? On the research side especially, are there ideal design partners, customers, things like that?
Myra Deng [00:52:03]: Yeah, I can talk about maybe the non-life-sciences side, and then I'm curious to hear from you on the life sciences. We're looking for design partners across many domains: language, anyone who's customizing language models or trying to push the frontier of code or reasoning models, is really interesting to us. We're also interested in the frontier of modeling: there are a lot of models that work in, as we call it, pixel space. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.
Shawn Wang [00:52:43]: Just because you mentioned the keyword
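For readers who want a concrete feel for the feature-steering idea discussed in this episode, here is a minimal sketch: add (or subtract) a single direction in a layer's activations, the way an SAE decoder vector for a "bad code" feature might be turned down. Everything here is illustrative; the toy layer, the random stand-in for a feature direction, and the steering strength are assumptions, not Goodfire's or Anthropic's actual tooling.

```python
# Illustrative only: steering a toy transformer layer's output along one
# "feature direction". In real mech interp work the direction would come from
# a trained sparse autoencoder; here it is a random placeholder vector.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 64
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

feature_dir = torch.randn(d_model)
feature_dir = feature_dir / feature_dir.norm()  # unit-norm feature direction
strength = -4.0                                 # negative = "turn the feature down"

def steer(module, inputs, output):
    # A forward hook may return a replacement output; here we add the scaled
    # direction to every token position's activation.
    return output + strength * feature_dir

handle = layer.register_forward_hook(steer)
x = torch.randn(1, 8, d_model)                  # (batch, seq, d_model) dummy input
steered = layer(x)
handle.remove()
print(steered.shape)                            # torch.Size([1, 8, 64])
```

Flipping the sign of `strength` amplifies the feature instead of suppressing it, which is the whole "turn the bad-code vector down, turn the good-code vector up" intuition from the conversation.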
Episode 211: Understanding HFpEF. Hyo Mun and Jordan Redden (medical students) explain the pathophysiology of heart failure with preserved ejection fraction (HFpEF) and how it differs from HFrEF. Dr. Arreaza asks insightful questions and summarizes some key elements of HFpEF. Written by Hyo Mun, MS4, American University of the Caribbean; and Jordan Redden, MS4, Ross University School of Medicine. Comments and edits by Hector Arreaza, MD.
You are listening to Rio Bravo qWeek Podcast, your weekly dose of knowledge brought to you by the Rio Bravo Family Medicine Residency Program from Bakersfield, California, a UCLA-affiliated program sponsored by Clinica Sierra Vista, Let Us Be Your Healthcare Home. This podcast was created for educational purposes only. Visit your primary care provider for additional medical advice.
What is EF? Just imagine: the heart is a pump. Blood gets into the heart through the veins, the ventricles fill up, and then they squeeze the blood out. The percent of blood that is pumped out is the EF (a worked example follows the references at the end of this episode). Let's start at the beginning. What is HFpEF?
Mike: HFpEF stands for heart failure with preserved ejection fraction. Basically, these patients squeeze normally (their ejection fraction is 50% or higher), but here's the thing: the heart can't relax and fill the way it should. The muscle gets stiff, almost like a thick leather boot that just won't stretch. And because the ventricle can't fill properly, pressure starts backing up into the lungs and the rest of the body. That's when patients start experiencing shortness of breath, leg swelling, fatigue: all those classic symptoms.
Dr. Arreaza: And this is where people get fooled by the ejection fraction.
Mike: Exactly. The ejection fraction tells you total left ventricular emptying, not just forward flow.
Jordan: The classic example is severe mitral regurgitation. You can eject 60% of your blood volume and still be in cardiogenic shock because most of that blood is leaking backward into the left atrium instead of going into the aorta. So you get pulmonary edema, hypotension, and fatigue, all with a "normal" EF. Which is honestly terrifying if you're over-relying on echo reports without thinking clinically.
Dr. Arreaza: And in HFpEF, functional mitral regurgitation often shows up later in the disease. It's not usually the primary cause; it's more of a marker of advanced disease. Moderate to severe MR in HFpEF independently predicts worse outcomes, including a higher risk of mortality or heart failure hospitalization. So, let's contrast this with HFrEF. How are these two different?
Mike: HFrEF, heart failure with reduced ejection fraction, is a pumping problem. The heart muscle is weak and can't contract effectively. Ejection fraction drops below 40%, and this is your classic systolic dysfunction.
Jordan: HFpEF, on the other hand, is diastolic dysfunction. The heart muscle is thick, fibrotic, and noncompliant. It squeezes fine, but it just doesn't relax, even though the EF looks reassuring on paper.
Mike: I like to explain it this way: HFrEF is a weak heart that can't squeeze. HFpEF is a stiff heart that can't relax. Totally different problems.
Dr. Arreaza: And then there's the gray zone: heart failure with mildly reduced EF, or HFmrEF. That's an EF between 41 and 49% with evidence of elevated filling pressures. It shares features of both worlds. So, what actually causes HFpEF versus HFrEF?
Jordan: HFpEF is basically what happens when all the problems of modern living catch up with you.
You've got chronic hypertension, obesity, diabetes, metabolic syndrome, aging, systemic inflammation: all of these things slowly remodel the heart over years. The muscle gets thick and stiff, and eventually the ventricle just loses its ability to relax. So HFpEF is really a disease of metabolic dysfunction and chronic stress on the heart.
Mike: HFrEF is more about direct injury. Think myocardial infarctions, ischemic cardiomyopathy, viral myocarditis, alcohol toxicity, chemotherapy like doxorubicin, genetic cardiomyopathies, or chronic uncontrolled tachycardia. These insults actually damage or kill heart muscle cells, leading to a dilated, weak ventricle that can't pump effectively.
Dr. Arreaza: So the short version: HFpEF is caused by chronic metabolic and hypertensive stress, while HFrEF is caused mainly by myocardial damage. A question we get a lot: does HFpEF eventually turn into HFrEF? What do you guys think?
Mike: In most cases, no. HFpEF patients usually stay HFpEF throughout their disease course. They don't just "burn out" and turn into HFrEF.
Jordan: They're generally separate disease entities with different pathophysiology. A patient with HFpEF can develop HFrEF if they have a big myocardial infarction or ongoing ischemia that damages the muscle, but that's not the natural progression.
Mike: Interestingly, though, the opposite can happen. Some HFrEF patients actually improve their ejection fraction with good medical therapy (that's called HF with improved EF), and it's a great sign that treatment is working.
Dr. Arreaza: Another question. How do HFpEF and HFrEF compare to restrictive cardiomyopathy and constrictive pericarditis?
Jordan: Clinically, they can all look very similar (dyspnea, edema, fatigue), but the underlying mechanisms are completely different.
Mike: In HFpEF, the myocardium itself is stiff from hypertrophy and fibrosis. The problem is intrinsic to the heart muscle, and EF stays preserved. Echo shows diastolic dysfunction with elevated filling pressures.
Jordan: In HFrEF, the myocardium is weak. The ventricle is often dilated and contracts poorly, with a reduced EF.
Mike: Restrictive cardiomyopathy is different. Here, the myocardium gets infiltrated by abnormal material (amyloid, iron, sarcoid), and that makes it extremely stiff. It can look like HFpEF on the surface, but it's usually more severe. On echo, you'll see biatrial enlargement, small ventricles, and preserved EF. And importantly, it's a pathologic diagnosis, so you need advanced imaging or biopsy to confirm it.
Jordan: Constrictive pericarditis is another mimic, but here the myocardium is usually normal. The problem is that the pericardium is thickened, calcified, and rigid, which physically prevents the heart from filling. Imaging shows pericardial thickening, septal bounce, and respiratory variation in flow, and cath shows equalization of diastolic pressures, which is the hallmark of constrictive pericarditis.
Dr. Arreaza: So the takeaway is: HFpEF is a clinical syndrome driven by common metabolic and hypertensive causes, while restrictive and constrictive diseases are specific pathologic entities. If "HFpEF" is unusually severe or not responding to treatment, you need to think beyond HFpEF. Which type of heart failure is more common right now?
Mike: Good question. The answer is HFpEF. It now accounts for up to 60% of all heart failure cases, and it's still rising.
Dr. Arreaza: Why is that?
Jordan: Because people are living longer, gaining weight, and developing more metabolic syndrome.
HFpEF thrives in older people and in people with obesity, hypertension, or diabetes: basically, the modern American population. At the same time, better treatment of acute MIs means fewer people are developing HFrEF from massive heart attacks.
Mike: HFpEF is the heart failure epidemic of the 21st century. It's honestly the cardiology equivalent of type 2 diabetes.
Dr. Arreaza: Let's talk about COVID-19. (It's 2025 and we're still talking about it.) Does it actually increase heart failure risk?
Mike: Yes, absolutely. COVID increases both acute and long-term heart failure risk.
Jordan: During acute infection, COVID can cause myocarditis, trigger massive inflammation, and precipitate acute decompensated heart failure, especially in patients with pre-existing disease. It also causes microthrombi, which can injure the myocardium.
Mike: And after infection, even mild cases are linked to a significantly higher risk of developing new heart failure within the following year. Both HFpEF and HFrEF rates go up.
Dr. Arreaza: I remember seeing this in 2021. We had a patient with acute COVID and HFrEF; her EF was about 10%. I lost contact with the patient, so in the end I don't know what happened to her. What's the pathophysiology of COVID and heart failure?
Mike: COVID causes direct viral injury through ACE2 receptors, triggers massive inflammation that damages the endothelium and heart muscle, and leads to microvascular clotting and fibrosis: all mechanisms that promote HFpEF.
Jordan: Add autonomic dysfunction, persistent low-grade inflammation, and worsening metabolic syndrome, and you've got a perfect storm for heart failure.
Dr. Arreaza: Bottom line: COVID is a cardiovascular disease as much as a respiratory one. If someone had COVID and now has unexplained dyspnea or fatigue, think about heart failure. Get an echo, get a BNP, start treatment. Last big question: why did we have so many therapies for HFrEF but essentially none for HFpEF for years?
Mike: HFrEF is mechanistically straightforward. You've got a weak heart with excessive neurohormonal activation, so you block RAAS, block the sympathetic system, and drop the afterload. The drugs make sense.
Jordan: HFpEF is messy. It's not one disease. It's stiffness, fibrosis, inflammation, microvascular dysfunction, metabolic disease, and atrial fibrillation, all overlapping. One drug can't fix all of that.
Mike: And some drugs that worked beautifully in HFrEF actually made HFpEF worse. Take beta blockers, for example. They slow heart rate, which is a problem because HFpEF patients rely on heart rate to maintain their cardiac output.
Jordan: The breakthrough came with SGLT-2 inhibitors, diabetes drugs that unexpectedly addressed multiple HFpEF mechanisms at once: volume, metabolism, inflammation, and myocardial energetics.
Dr. Arreaza: The miracle drug for HFpEF! Alright, let's wrap up.
Mike: Bottom line: HFpEF is common, complex, and dangerous, even if the EF looks "normal."
Jordan: And if you're relying on ejection fraction alone, HFpEF will humble you every time.
Dr. Arreaza: If you liked this episode, share it with a friend or a colleague and rate us wherever you listen. This is Dr. Arreaza, signing off. Even without trying, every night you go to bed a little wiser. Thanks for listening to Rio Bravo qWeek Podcast. We want to hear from you: send us an email at RioBravoqWeek@clinicasierravista.org, or visit our website riobravofmrp.org/qweek. See you next week!
_____________________
References:
Barzin A, Barnhouse KK, Kane SF. Heart Failure With Preserved Ejection Fraction. Am Fam Physician. 2025;112(4):435-440.
Heidenreich PA, Bozkurt B, Aguilar D, et al. 2022 AHA/ACC/HFSA guideline for the management of heart failure. Circulation. 2022;145(18):e895-e1032.
Kittleson MM, Panjrath GS, Amancherla K, et al. 2023 ACC expert consensus decision pathway on management of heart failure with preserved ejection fraction. J Am Coll Cardiol. 2023;81(18):1835-1878.
Anker SD, Butler J, Filippatos G, et al. Empagliflozin in heart failure with a preserved ejection fraction. N Engl J Med. 2021;385(16):1451-1461.
Solomon SD, McMurray JJV, Claggett B, et al. Dapagliflozin in heart failure with mildly reduced or preserved ejection fraction. N Engl J Med. 2022;387(12):1089-1098.
Pitt B, Pfeffer MA, Assmann SF, et al. Spironolactone for heart failure with preserved ejection fraction. N Engl J Med. 2014;370(15):1383-1392.
Yusuf S, Pfeffer MA, Swedberg K, et al. Effects of candesartan in patients with chronic heart failure and preserved left-ventricular ejection fraction. Lancet. 2003;362(9386):777-781.
Solomon SD, McMurray JJV, Anand IS, et al. Angiotensin-neprilysin inhibition in heart failure with preserved ejection fraction. N Engl J Med. 2019;381(17):1609-1620.
Kosiborod MN, Abildstrøm SZ, Borlaug BA, et al. Semaglutide in patients with heart failure with preserved ejection fraction and obesity. N Engl J Med. 2023;389(12):1069-1084.
Xie Y, Xu E, Bowe B, Al-Aly Z. Long-term cardiovascular outcomes of COVID-19. Nat Med. 2022;28(3):583-590.
Puntmann VO, Carerj ML, Wieters I, et al. Outcomes of cardiovascular magnetic resonance imaging in patients recently recovered from COVID-19. JAMA Cardiol. 2020;5(11):1265-1273.
Basso C, Leone O, Rizzo S, et al. Pathological features of COVID-19-associated myocardial injury. Eur Heart J. 2020;41(39):3827-3835.
Nalbandian A, Sehgal K, Gupta A, et al. Post-acute COVID-19 syndrome. Nat Med. 2021;27(4):601-615.
Badve SV, Roberts MA, Hawley CM, et al. Effects of angiotensin-converting enzyme inhibitors and angiotensin receptor blockers in adults with estimated GFR less than 60 mL/min per 1.73 m². Ann Intern Med. 2024;177(8):953-963.
Navis G, Faber HJ, de Zeeuw D, de Jong PE. ACE inhibitors and the kidney: a risk-benefit assessment. Drug Saf. 1996;15(3):200-211.
Textor SC, Novick AC, Tarazi RC, et al. Critical perfusion pressure for renal function in patients with bilateral atherosclerotic renal vascular disease. Ann Intern Med. 1985;102(3):308-314.
Hackam DG, Spence JD, Garg AX, Textor SC. Role of renin-angiotensin system blockade in atherosclerotic renal artery stenosis and renovascular hypertension. Hypertension. 2007;50(6):998-1003.
Ronco C, Haapio M, House AA, et al. Cardiorenal syndrome. J Am Coll Cardiol. 2008;52(19):1527-1539.
Prins KW, Neill JM, Tyler JO, et al. Effects of beta-blocker withdrawal in acute decompensated heart failure. JACC Heart Fail. 2015;3(8):647-653.
Jondeau G, Neuder Y, Eicher JC, et al. B-CONVINCED: Beta-blocker CONtinuation Vs. INterruption in patients with Congestive heart failure hospitalizED for a decompensation episode. Eur Heart J. 2009;30(18):2186-2192.
Theme song, Works All The Time by Dominik Schwarzer, YouTube ID: CUBDNERZU8HXUHBS, purchased from https://www.premiumbeat.com/.
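As promised earlier, a worked version of the ejection-fraction arithmetic from the top of the episode. The volumes are illustrative numbers, not patient data: with an end-diastolic volume (EDV) of 120 mL and an end-systolic volume (ESV) of 50 mL,

```latex
\[
\mathrm{EF} = \frac{\mathrm{EDV}-\mathrm{ESV}}{\mathrm{EDV}}\times 100\%
            = \frac{120\ \mathrm{mL} - 50\ \mathrm{mL}}{120\ \mathrm{mL}}\times 100\%
            \approx 58\%
\]
```

An EF of 58% counts as "preserved" (50% or higher), which is exactly why a stiff, poorly filling HFpEF heart can hide behind a normal-looking echo number.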
In this episode of DataTalks.Club, Paul Iusztin, founding AI engineer and author of the LLM Engineer's Handbook, breaks down the transition from traditional software development to production-grade AI engineering. We explore the essential skill stack for 2026, the shift from "PoC purgatory" to shipping real products, and why the future of the field belongs to the full-stack generalist.
You'll learn about:
- Why the role is evolving into the "new software engineer" and how to own the full product lifecycle.
- Identifying when to use traditional ML (like XGBoost) over LLMs to avoid over-engineering.
- The architectural shift from fine-tuning to mastering data pipelines and semantic search (see the sketch after this episode's notes).
- Building reliable agentic workflows.
- How to use coding assistants like Claude and Cursor to act as an architect rather than just a coder.
- Why human-in-the-loop evaluation is the most critical bottleneck in shipping reliable AI.
- How to build a "Second Brain" portfolio project that proves your end-to-end engineering value.
Links:
- Course link: https://academy.towardsai.net/courses/agent-engineering?ref=b3ab31
- Decoding AI Magazine: https://www.decodingai.com/
TIMECODES:
00:00 From code to cars: Paul's journey to AI
07:08 Deep learning and the autonomous driving challenge
12:09 The transition to global product engineering
15:13 Survival guide: Data science vs. AI engineering
22:29 The full-stack AI engineer skill stack
29:12 Mastering RAG and knowledge management
32:27 The generalist edge: Learning with AI
42:21 Technical pillars for shipping AI products
54:05 Portfolio secrets and the "second brain"
58:01 The future of the LLM engineer's handbook
This talk is designed for software engineers, data scientists, and ML engineers looking to move beyond proof-of-concepts and master the engineering rigors of shipping AI products in a production environment. It is particularly valuable for those aiming for founding or lead AI roles in startups.
Connect with Paul:
- LinkedIn - https://www.linkedin.com/in/pauliusztin/
- Website - https://www.pauliusztin.ai/
Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub: https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/
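On the "data pipelines and semantic search" topic above, here is a minimal, hedged sketch of the retrieval step. It uses TF-IDF purely so the example runs without any model download; a production pipeline of the kind the episode describes would swap in learned embeddings. The document strings are invented.

```python
# Tiny retrieval sketch: vectorize a corpus, then rank documents against a
# query by cosine similarity. TF-IDF stands in for a real embedding model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Deploying LLM agents to production with evaluation gates",
    "Gradient boosting with XGBoost for tabular churn prediction",
    "Building retrieval pipelines for RAG over internal wikis",
]
vec = TfidfVectorizer().fit(docs)
doc_matrix = vec.transform(docs)

query = vec.transform(["retrieval augmented generation pipeline"])
scores = cosine_similarity(query, doc_matrix).ravel()
print(docs[scores.argmax()])  # best match: the RAG pipelines document
```

The interface stays the same when you upgrade the vectorizer to a neural embedding model, which is why mastering the pipeline matters more than any single model choice.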
Nick Gillian is the Co-Founder and CTO at Archetype AI, working on physical AI foundation models that understand and reason over real-world sensor data.
Physical AI: Teaching Machines to Understand the Real World // MLOps Podcast #360 with Nick Gillian, Co-Founder and CTO of Archetype AI
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide/
// Abstract
As AI moves beyond the cloud and simulation, the next frontier is Physical AI: systems that can perceive, understand, and act within real-world environments in real time. In this conversation, Nick Gillian, Co-Founder and CTO of Archetype AI, explores what it actually takes to turn raw sensor and video data into reliable, deployable intelligence. Drawing on his experience building Google's Soli and Jacquard and now leading development of Newton, a foundation model for Physical AI, Nick discusses how real-time physical understanding changes what's possible across safety monitoring, infrastructure, and human–machine interaction. He shares lessons learned translating advanced research into products that operate safely in dynamic environments, and why many organizations underestimate the challenges and opportunities of AI in the physical world.
// Bio
Nick Gillian, Ph.D., is Co-Founder and CTO of Archetype AI with over 15 years of experience turning advanced AI and interaction research into real-world products. At Archetype, he leads the AI and engineering teams behind Newton, a first-of-its-kind Physical AI foundation model that can perceive, understand, and reason about the physical world. Before co-founding Archetype, Nick was a Senior Staff Machine Learning Engineer at Google and a researcher at MIT, where he developed AI and ML methods for real-time sensor understanding. At Google's Advanced Technology and Projects group, he led machine learning research that powered breakthrough products like Soli radar and Jacquard, and helped advance sensing algorithms across Pixel, Nest, and wearable devices.
// Related Links
Website: https://www.archetypeai.io/
https://www.archetypeai.io/blog/timefusion-newton
https://www.nature.com/articles/s41598-023-44714-2
https://www.youtube.com/watch?v=Pow4utY9teU
https://www.youtube.com/watch?v=uE0jjdzwe9w
https://arxiv.org/abs/2410.14724
Coding Agents Conference: https://luma.com/codingagents
~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Nick on LinkedIn: /nick-gillian-b27b1094/
Timestamps:
[00:00] Physical Agent Framework
[00:56] Physical AI Clarification
[06:53] Building a Repair Model
[12:41] World Models and LLMs
[17:17] Data Weighting Strategies
[24:19] Data Diversity vs Quantity
[38:30] R&D and Product Creation
[41:22] Construction Site Data Shipping
[50:33] Wrap up
In our latest ELC episode, we are addressing some of the biggest challenges facing engineering leaders today: identifying your scaling thesis, putting that thesis into practice, and addressing implementation challenges. Jaikumar Ganesh, Head of Engineering @ Anyscale, shares insights from his experience working at top tech companies like Android and Uber, and how to apply those lessons within your own orgs. We also cover strategies for identifying what to build, using data effectively when it comes to understanding AI agents, and keeping your intent (and customer success) top of mind. Additionally, Jaikumar discusses his experience as a GM and why all orgs should adopt cross-functional skillsets as part of their company culture.
ABOUT JAIKUMAR GANESH
Jaikumar Ganesh is an accomplished technology leader and the Head of Engineering at Anyscale. With a deep background in engineering and customer-facing roles, Jaikumar has a proven track record of building and scaling engineering organizations. He is passionate about pushing the boundaries of product and engineering innovation while ensuring customer needs are met, and is committed to building empowering organizations rooted in trust, respect, and growth. Jaikumar is excited about working with companies to harness the power of AI and distributed computing to achieve their goals. He previously co-started and co-led Uber's AI group (the central ML group at Uber) and was also on the early team at Android @ Google.
This episode is brought to you by Retool!
What happens when your team can't keep up with internal tool requests? Teams start building their own, Shadow IT spreads across the org, and six months later you're untangling the mess… Retool gives teams a better way: governed, secure, and no cleanup required. Retool is the leading enterprise AppGen platform, powering how the world's most innovative companies build the tools that run their business. Over 10,000 organizations including Amazon, Stripe, Adobe, Brex, and Orangetheory Fitness use the platform to safely harness AI and their enterprise data to create governed, production-ready apps. Learn more at Retool.com/elc
SHOW NOTES:
Reflecting on scaling patterns across the 2000s, 2010s, and the AI era (03:27)
Why "copy-pasting" scaling strategies from other companies leads to failure (5:56)
How to define a scaling thesis by mapping revenue projections to infrastructure strategy (7:52)
Infrastructure shifts: From Android's OS abstractions to Uber's on-prem data centers (9:56)
The "Build vs. Buy" dilemma in the age of AI agents and third-party solutions (12:09)
Why "Knowing What to Build" is the new long pole in engineering productivity (20:17)
Developing "Product Thinking" within engineering and infrastructure teams (23:10)
The emergence of Context Graphs and "Source of Truth" platforms for AI agents (24:46)
How to avoid data & context graphs becoming bottlenecks (27:05)
Lessons from GM leadership: Bridging the gap between engineering, product, and sales (29:06)
The "6-20" Initiative: Uniting cross-functional teams around specific customer wins (32:45)
Training engineers to empathize with customer pain and translate technical wins into the language of sales (33:48)
Utilizing cross-departmental daily standups and leaderboards to drive aggressive "block and tackle" execution (36:18)
Tracing execution failures back to early decision-making and judgment gaps (38:42)
Rapid fire questions (45:28)
This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor - https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer; Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter; check out her other work at https://elliecoggins.com/about/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
How are hospitals using AI and HPC to assist them in helping save lives? This week, Technology Now is joined by Keith Perry, Senior Vice President and Chief Information Officer at St. Jude Children's Research Hospital, to explore how St. Jude uses the latest technologies to help treat and prevent illness and catastrophic disease, giving patients and families more time, and more hope, when it comes to diagnosis.
This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Sam Jarrell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations.
About Keith: https://www.linkedin.com/in/keith-perry-8562347/
Sources:
Hernigou P. Ambroise Paré III: Paré's contributions to surgical instruments and surgical instruments at the time of Ambroise Paré. Int Orthop. 2013 May;37(5):975-80. doi: 10.1007/s00264-013-1872-y. Epub 2013 Apr 12. PMID: 23580029; PMCID: PMC3631503.
https://www.surgicalholdings.co.uk/history-of-surgical-instruments.html
Smith-Bindman R, Kwan ML, Marlow EC, et al. Trends in Use of Medical Imaging in US Health Care Systems and in Ontario, Canada, 2000-2016. JAMA. 2019;322(9):843-856. doi:10.1001/jama.2019.11456
https://caferoentgen.com/2023/10/07/a-tale-of-two-hands-the-story-behind-the-two-famous-radiographs-captured-by-wilhelm-roentgen/
https://www.orau.org/health-physics-museum/collection/shoe-fitting-fluoroscope/index.html
This week on The Heavyist Podcast we have reviews of brutal death metal from Stabbing and blackgaze from MØL. We break down the absolute sonic whiplash of Stabbing's massive Century Media debut, Eon of Obscenity, and the cinematic evolution of Denmark's MØL with their new record, Dreamcrush. What a superbly strong New Music Friday to close out January.
Iron can be the spark for energy or the fuel for oxidative fire—and most lab reports don't tell you which side you're on. We dig into what really matters: tighter ferritin targets, how genetics and food shape absorption, and why the "normal range" can still mean higher risk for stroke, atherosclerosis, heart failure, and insulin resistance.
We start with the fundamentals—heme vs non‑heme iron, why absorption is so uneven, and how early CBC clues like a low MCV can flag deficiency before hemoglobin drops. From there we trace the other side of the U‑curve: iron overload. Hereditary hemochromatosis is more common than many realize and often hides in plain sight until liver enzymes climb, infections recur, or glucose control slips. We connect the dots between elevated ferritin and vascular injury, making sense of the research that links higher stores with stiffer arteries and greater ischemic stroke risk. The biology checks out: unbound iron drives oxidation at the artery lining and feeds pathogens when the immune system is under strain.
Practical steps anchor the conversation. If ferritin runs low, we look first for hidden blood loss—ulcers, polyps, or heavy menstruation—then replete with better‑tolerated iron options and supportive meal planning. If ferritin runs high, we outline safe ways to lower stores, from regular blood donation or therapeutic phlebotomy to meal combinations that blunt absorption. We share evidence‑informed "optimal" ranges—women roughly 70–120 ng/mL, men 80–130 ng/mL—and discuss when altitude, lung disease, or inflammation can skew the picture. The result is a clear plan to move from reactive anemia management to proactive iron optimization for energy, heart health, and longevity.
Ready to check your ferritin and dial in your range? Listen, share with someone who needs a clearer path, and subscribe for more science‑grounded guidance. If this helped, leave a review and tell us your next step.
For video and Powerpoint slide deck: www.thehealthedgepodcast.com
Charlie Langton, Jennifer Hammond and ML are HACKED! Hear all about their troubles with X. STRAIGHT DOPE Soulmates already know, […]
00:00-20:00: ESPN's Kevin Connors goes Off the CHarts with ML and Joe Jr. We chat state of Syracuse basketball, preview SU-UNC, mid-major highs and lows in this crazy era and more. Plus, Kevin reacts to Joe Brady as the new head coach of his beloved Buffalo Bills. Presented by CH Insurance. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
Discover the five scientifically backed root causes driving autoimmune disorders that traditional medicine overlooks. From vitamin D deficiency and gut damage to hidden infections, antioxidant depletion, and chronic stress, learn the framework for understanding what's really happening in your body and where to start your healing journey.
FEATURED PRODUCT
Zen – Bovine Adrenal Support
When your body is battling an autoimmune disorder, your adrenal glands are working overtime to produce cortisol and combat inflammation. Zen features bovine adrenal gland extracts designed to support adrenal function, helping your body manage stress responses and maintain energy levels—critical factors when addressing the chronic stress patterns that contribute to autoimmune development and flare-ups discussed in this episode.
Join us as Neel explores how observability is evolving beyond traditional logs, metrics, and traces into a predictive, AI-powered discipline. Neel walks through the evolution of observability, demonstrating how OpenTelemetry, machine learning, and LLMs are transforming how we monitor and maintain modern applications. You'll learn about dynamic sampling techniques that reduce costs while maintaining visibility, how ML algorithms detect anomalies before they cause outages, and practical implementations using tools like the OpenTelemetry Collector. This episode covers real-world scenarios from reducing massive log volumes to predicting system failures before they impact customers.
Timestamps
0:00 Welcome & Introduction
4:29 Neel's Background & Community Work
5:03 The Evolution of Observability
6:29 The 2 AM Production Incident Scenario
8:13 OpenTelemetry's Role in Modern Observability
12:45 Dynamic Sampling Techniques
18:22 ML & AI in Anomaly Detection
24:16 LLM Observability Explained
28:32 Cost Optimization Strategies
30:04 Context Windows & Token Management
32:00 Self-Healing Systems Discussion
34:15 Edge Cases: When Dynamic Sampling Doesn't Work
36:27 Wrap-up & Resources
How to find Neel:
https://www.linkedin.com/in/neelcshah/
https://bento.me/neelshah
Links from the show:
https://neelshah.dev/blogs/observability-2
https://opentelemetry.io/
https://middleware.io/blog/observability-2-0/
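As a taste of the anomaly-detection idea mentioned above (flagging a metric that drifts from its rolling baseline before it becomes an outage), here is a minimal sketch. The synthetic latency series, window size, and z-score threshold are all invented for illustration; production systems layer far richer models on top of this.

```python
# Rolling z-score anomaly detection on a synthetic latency series: alert when
# a point deviates strongly from the mean of the preceding window.
import numpy as np

rng = np.random.default_rng(0)
latency_ms = rng.normal(120, 10, 200)   # ~120 ms baseline with noise
latency_ms[150:155] += 80               # inject a synthetic latency spike

window = 30
for i in range(window, len(latency_ms)):
    baseline = latency_ms[i - window:i]
    z = (latency_ms[i] - baseline.mean()) / (baseline.std() + 1e-9)
    if z > 4:                           # far outside the recent baseline
        print(f"t={i}: latency {latency_ms[i]:.0f} ms, z={z:.1f} -> alert")
```

The same shape of logic, applied per service and per metric, is what lets a collector-side pipeline surface trouble minutes before dashboards or customers notice it.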
00:00-15:00: WGR 550 Buffalo Bills insider Sal Capaccio breaks down the Bills promoting Joe Brady to head coach. Then, ML shares who Brady could bring on as offensive/defensive coordinators. Thanks to Byrne Dairy and CH Insurance. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
WANTED: Developers and STEM experts! Get paid to create benchmarks and improve AI models. Sign up for Alignerr using our link: https://alignerr.com/?referral-source=briankeating
Some of the most powerful AI systems we've ever built are succeeding for reasons we still don't understand. And worse, they may succeed for reasons that could lock us into the wrong future for humanity. Today's guest is Anil Ananthaswamy, an award-winning science writer and one of the clearest thinkers on the mathematical foundations of machine learning. In this conversation, we're not just talking about new demos, incremental improvements, or updates on newly released models. We're asking even harder questions: Why does the mathematics of machine learning work at all? How do these models succeed despite problems like overparameterization and limited training data? And are large language models revealing deep structure, or are they just producing very convincing illusions and pushing us toward an increasingly AI-slop-driven future?
KEY TAKEAWAYS
00:00 — Book explores why ML works through math
02:47 — Perceptron proof shows simple math guarantees learning
05:11 — Early AI failed due to single-layer limits
07:12 — Nonlinear limits caused the first AI winter
09:04 — Backpropagation revived neural networks
10:59 — GPUs + big data enabled deep learning
15:25 — AI success risks technological lock-in
17:30 — LLMs lack human-like learning and embodiment
22:57 — High-dimensional spaces power ML behavior
27:36 — Data saturation may slow future gains
31:11 — Continual learning is still missing in AI
33:46 — Neuromorphic chips promise energy efficiency
41:49 — Overparameterized models still generalize well
45:05 — SGD succeeds via randomness in complex landscapes
48:27 — Perceptrons remain the core of modern neural nets
Additional resources:
Anil's NEW Book "Why Machines Learn: The Elegant Math Behind Modern AI": https://www.amazon.com/Why-Machines-Learn-Elegant-Behind/dp/0593185749
Get My NEW Book: Focus Like a Nobel Prize Winner: https://www.amazon.com/dp/B0FN8DH6SX?ref_=pe_93986420_775043100
Please join my mailing list here
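The perceptron convergence result mentioned in the takeaways is easy to see in code. A hedged toy, with synthetic, linearly separable data (the proof's key assumption): the update rule fires only on mistakes, and for separable data the theorem guarantees it stops making mistakes after finitely many updates.

```python
# Toy perceptron on 2D points labeled by the line x1 + x2 = 0; labels in {-1, +1}.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
X = X[np.abs(X[:, 0] + X[:, 1]) > 0.2]         # keep a clean margin from the boundary
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)     # ground-truth separable labels

w, b = np.zeros(2), 0.0
for epoch in range(100):
    mistakes = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:             # misclassified (or on the boundary)
            w, b = w + yi * xi, b + yi         # the classic perceptron update
            mistakes += 1
    if mistakes == 0:                          # a full clean pass = converged
        break
print(f"converged after {epoch + 1} epochs; w = {w}, b = {b}")
```

The margin filter matters: the convergence proof bounds total mistakes by (R/γ)², where R is the data radius and γ the margin, which is exactly the "simple math guarantees learning" point from the episode.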
ML is impatient for peace in Ukraine, while Marc says he's fine if we snatch Greenland. Disagreement over college football […]