Today's clip is from Episode 152 of the podcast, with Daniel Saunders. In this conversation, Daniel explains how to incorporate risk aversion into Bayesian price optimization. The key insight is that uncertainty around expected profit is asymmetric across price points: low prices yield more predictable (if modest) returns, while high prices introduce much wider uncertainty. Rather than simply maximizing expected profit, you can pass profit through an exponential utility function that models diminishing returns, a well-established idea from economics. This adds an adjustable risk-aversion parameter to the optimization: as risk aversion increases, the model shifts toward more conservative price recommendations, trading off potentially large but uncertain gains for outcomes with tighter, more reliable distributions.

Get the full discussion here.

• Join this channel to get access to perks: https://www.patreon.com/c/learnbayesstats
• Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302
• Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!
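The approach described above can be sketched in a few lines of Python. To be clear, this is not code from the episode: the candidate prices and profit distributions below are made-up stand-ins for the posterior profit draws you would get from a real Bayesian demand model, and the exponential (CARA) utility u(x) = (1 - exp(-lambda * x)) / lambda is the standard textbook form of the utility function mentioned in the summary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of profit at each candidate price point,
# mimicking the asymmetry described above: low prices give modest but
# tight profit distributions, high prices give higher means with much
# wider uncertainty.
prices = np.array([10.0, 15.0, 20.0, 25.0])
means = np.array([100.0, 130.0, 150.0, 160.0])
sds = np.array([5.0, 15.0, 40.0, 80.0])
profit_draws = rng.normal(means, sds, size=(10_000, len(prices)))

def expected_utility(profit, risk_aversion):
    """Average exponential (CARA) utility over posterior draws.

    risk_aversion = 0 recovers plain expected-profit maximization.
    """
    if risk_aversion == 0:
        return profit.mean(axis=0)
    u = (1 - np.exp(-risk_aversion * profit)) / risk_aversion
    return u.mean(axis=0)

for lam in [0.0, 0.01, 0.05]:
    best = prices[np.argmax(expected_utility(profit_draws, lam))]
    print(f"risk aversion {lam}: recommended price {best}")
```

With these made-up numbers, the risk-neutral rule picks the highest-mean (and widest) price, and raising the risk-aversion parameter walks the recommendation down toward price points with tighter profit distributions, which is exactly the trade-off described above.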
Welcome to Predictable B2B Success. In this episode, Vinay Koshy interviews John Cousins (investor, tech founder, and educator), whose MBA ASAP program has helped over 30,000 students worldwide. Learn how John turns business theory into practical advice for founders at every level. Hear why John created MBA ASAP, how mental models and curiosity drive founder success, and his approach to simplifying business concepts. Get practical tips on financial literacy, pricing, and common pitfalls for entrepreneurs. Want actionable business advice and new ways to think about B2B success? Listen in for practical strategies you can use now.

Some topics we explore in this episode include:

• John Cousins' Career Path – His trajectory from engineering to business, teaching, writing, and investing.
• Creation and Purpose of MBA ASAP – Addressing the gap between academic business education and real-world practices.
• Educational Techniques – Making complex business topics simple and actionable through practical examples.
• Mental Models – Using frameworks for strategic thinking and decision-making in business.
• AI and Automation – The impact of AI on business operations, vibe coding, and leveraging tech tools.
• Decision-Making Processes – Heuristics, Bayesian analysis, and strategies for faster, smarter choices.
• Financial Literacy – Simplifying accounting concepts and why finance matters for founders.
• Iterative Market Testing – Applying the "ready, fire, aim" philosophy to test product demand via email and feedback.
• Pricing and Revenue Strategies – Finding optimal pricing, avoiding underpricing, and scaling revenue.
• Skill Stacking – Building complementary skills like reading, sales, and negotiation to excel in business communication.

And much, much more...
• Support & get perks!
• Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com
• Intro to Bayes and Advanced Regression courses (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Chapters:
00:00 The Importance of Decision-Making in Data Science
06:41 From Philosophy to Bayesian Statistics
14:57 The Role of Soft Skills in Data Science
18:19 Understanding Decision Theory Workflows
22:43 Shifting Focus from Accuracy to Business Value
26:23 Leveraging PyTensor for Optimization
34:27 Applying Optimal Decision-Making in Industry
40:06 Understanding Utility Functions in Regulation
41:35 Introduction to a Bayesian Decision Theory Workflow
42:33 Exploring Price Elasticity and Demand
45:54 Optimizing Profit through Bayesian Models
51:12 Risk Aversion and Utility Functions
57:18 Advanced Risk Management Techniques
01:01:08 Practical Applications of Bayesian Decision-Making
01:06:54 Future Directions in Bayesian Inference
01:10:16 The Quest for Better Inference Algorithms
01:15:01 Dinner with a Polymath: Herbert Simon

Thank you to my Patrons for making this episode possible!

Links from the show:
• Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026! https://www.fieldofplay.co.uk/
• A Bayesian decision theory workflow
• Daniel's website, LinkedIn and GitHub
• LBS #124 State Space Models & Structural Time Series, with Jesse Grabowski
• LBS #123 BART & The Future of Bayesian Tools, with Osvaldo Martin
• LBS #74 Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt
• LBS #76 The Past, Present & Future of Stan, with Bob Carpenter
Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors from Geoff Hinton to Yann LeCun and a team of deep domain experts to tackle this next frontier in AI applications.

In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!!!).

We begin with a provocative framing: experiments as computation. Welling describes the idea of a "physics processing unit": a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.

Along the way, we discuss:

* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials, not software, may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy.
The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps:

* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer
* Max introduces the idea of a Physics Processing Unit: using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials
* Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved
* Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy
* Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning
* How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging)
* The funding surge and why AI-for-Science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science
* Protein folding, ML force fields, and the tipping-point moment.
* 00:10:12 – How Engineers Can Enter AI for Science
* Practical pathways: curriculum, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software
* The argument that everything, LLMs included, rests on materials innovation.
* 00:13:02 – Materials as a Search Engine
* The vision: automated exploration of chemical space like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture
* Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First
* Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins
* Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans
* Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)?
* Symmetry in neural networks explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation?
* The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics
* His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, maybe the fastest possible. It's a bit hard to program because you have to do all these experiments; those are quite bulky, it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center and then you can ask nature to do some computations. Your interface with nature is a bit more complicated.
But then these things will have to seamlessly work together to get to a new material that you're interested in.

[01:00:44:14 - 01:01:34:08]
Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which has literally stood the test of time. If you're a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which is, I think, very different from all of these other things you worked on. So the first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what the next big thing you want to work on is?

[01:01:34:08 - 01:02:41:13]
Max: It has actually evolved a lot. In my younger days, let's say, I would just follow what I found super interesting. I have this sensor, which I think many people have but maybe don't really use very much, where you get this feeling of being very excited about some problem. What's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about? And I followed that basically throughout my career. But I have to say that as you get older this changes a little bit, in the sense that a new dimension comes in, which is impact. Going into two-dimensional quantum gravity, you're pretty much guaranteed there will be no impact from what you do; maybe a few papers, but not in this world, at this energy scale.
As I get closer to retirement, which is fortunately still 10 years away or so, I do want to make a positive impact in the world. And I got pretty worried about climate change.

[01:02:43:15 - 01:03:19:11]
Max: I think politics seems to have a hard time solving it, especially these days. And so I thought, better work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. So it's combining the impact you can make with the interesting science. It's these two dimensions: working on things where you feel there's something very deep going on, and on the other hand, trying to build tools that can actually make a real impact in the world.

[01:03:19:11 - 01:03:39:23]
RJ: So, the thread: when I look back at the different things that you worked on, some of them seem pretty connected, like the physics to equivariance and graph neural networks, maybe. And that seems to be somewhat related to Cusp. Do you have a thread through there?

[01:03:39:23 - 01:06:52:16]
Max: Yeah. So physics is the thread. Having spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, things that haven't actually been figured out in quantum gravity. So that is really the frontier. There are also a lot of mathematical tools that you can use, right? In particle physics, for instance, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to machine learning was, I thought, a very deep and interesting mathematical problem.
I did this with Taco Cohen, and Taco was the main driver behind it. We went all the way from simple rotational symmetries to gauge symmetries on spheres and things like that. And Maurice Weiler, who's also here, was a very good PhD student with me; he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently I've taken a somewhat different path, which is the relationship between diffusion models and the field called stochastic thermodynamics. This is basically thermodynamics, which is a theory of equilibrium, but formulated for out-of-equilibrium systems. And it turns out that the mathematics that we use for diffusion models, but also for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. Actually, when I taught a course in Muizenberg in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), I turned that into a book. Two years later the book was finished, and I've sent it to the publisher. It's about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of... I don't know, I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story which is maybe good to tell, to connect my PhD back to where I am now. I did my PhD with a Nobel laureate, Gerard 't Hooft. He is the most brilliant man I've ever met; he was never wrong about anything as long as I've known him.
And now he says quantum mechanics is wrong, and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex, but he's trying to address this understandability, let's say, of quantum mechanics head on. And I find it very courageous, and I'm completely fascinated by it. So I'm also trying to think about, okay, can I actually understand quantum mechanics in a more mundane way, without all the weird multiverses and collapses and things like that? So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.

[01:06:52:16 - 01:07:05:15]
Brandon: You are still very involved in understanding physics and the world, and also in its applications to machine learning, introducing new formalisms. That's really cool.

[01:07:05:15 - 01:07:18:02]
Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and AI. And that's called AI for science, or science for AI; it's actually a new discipline that's emerging.

[01:07:18:02 - 01:07:18:19]
Speaker 5: Yeah.

[01:07:18:19 - 01:07:45:14]
Max: And it's not just emerging, it's exploding, I would say. That's the better term, because investments have gone from the hundreds of millions into the billions now. There's now actually a startup by Jeff Bezos with a $6.2 billion seed round. Right. Insane. I guess it's the largest startup round ever, I think. And that's in this field, AI for science. It tells you something, that we are creating a new bubble here.

[01:07:46:15 - 01:07:53:28]
Brandon: So why do you think that is? What has changed that has motivated people to start working on AI for science type problems?

[01:07:53:28 - 01:08:49:17]
Max: So there are two reasons, actually.
One is that people have been applying the new tools from AI to the sciences, which is quite natural. And there are, I think, two big examples: protein folding is a big one, and the other is machine learning force fields, something called machine-learned interatomic potentials. Both of them have been very successful, and both also had something to do with symmetries, which is a little cool. And people in the AI sciences saw an opportunity to apply the tools that they had developed beyond ad placement, right, or multimedia applications, to something that could actually make a very positive impact on society, like health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.

[01:08:50:19 - 01:09:42:14]
Max: Besides that, the science itself is also very interesting. I would say the fact that these two fields are coming together, and that we're now at the point where we can actually model these things effectively and move the needle on some of these scientific methodologies, is a very unique moment. People recognize that, okay, now we're at the cusp of something new, which is what the company is named after. We're at the cusp of something new. And of course that always creates a lot of energy. It's a green field; nobody's been there. I can rush in and start harvesting there, right? And I think that's also what's causing a lot of the enthusiasm in the field.

[01:09:42:14 - 01:10:12:18]
RJ: If you're an AI engineer, and basically the people that listen to this podcast will be in the field, you maybe don't have a strong science background, but are excited. I would say most AI practitioners, ML engineers, or scientists have some background: a little bit of physics, a little bit in college, maybe even graduate school. But how does somebody who is not a scientist on a day-to-day basis get involved?

[01:10:12:18 - 01:10:14:28]
Max: Well, they can read my book once it's out.

[01:10:16:07 - 01:11:05:24]
Max: More seriously: we should create curricula that are on this interface. Some universities already have actual courses you can take, and maybe online courses. These workshops where we are now are actually very good as well. And we should probably have more tutorials before the workshop starts; I've actually proposed this at some point, to first have an hour of tutorial so that people new to the field can get in. There's a lot out there. Much of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast, I would say. So I think it will come. And these days you can watch videos and things; there's a huge amount of content you can go and see.

[01:11:05:24 - 01:11:28:28]
Brandon: So maybe a follow-up to that. That's how people learn and get involved, but why should they get involved? A lot of people in our audience will be interested in AI engineering, but they may be looking for bigger impacts in the world. What opportunities does AI for science provide them to make an impact, to change the world, that working in the world of pure bits would not?

[01:11:28:28 - 01:11:40:06]
Max: So my view is that underlying almost everything is material. We are focusing a lot on LLMs now, which is kind of the software layer.

[01:11:41:06 - 01:11:56:05]
Max: I would say, if you think very hard, underlying everything is material.
So underlying an LLM is a GPU, and underlying a GPU is a wafer on which we have to deposit materials. Do we want to wait a little bit?

[01:12:02:25 - 01:12:11:06]
Max: Underlying everything is material. So I was saying, there's the LLM, and underlying the LLM is a GPU on which it runs. In order to make that GPU,

[01:12:12:08 - 01:12:43:20]
Max: you have to put materials down on a wafer and shine EUV light on it in order to etch the structures in. But that is now an actual materials problem, because we've more or less reached the limits of scaling things down, and now we are trying to improve further through new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to mess up this world. And so there are, for instance, batteries; that's a complete materials problem. There are fuel cells.

[01:12:44:23 - 01:13:01:16]
Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can capture, theoretically, up to 50% of the light, where now we're at, I don't know, maybe 22% or something. So these are huge changes, all from materials innovation.

[01:13:02:21 - 01:13:47:15]
Max: And yeah, I think wherever you go, I can probably dig deep enough and tell you that the very foundation of what you're doing is a materials problem. And so I think it's just very nice to work on this very foundation. And also because, and this is maybe also something that's happening now, we can start to search through this materials space. This has never been the case, right? The normal way scientists work is: you read papers, you come up with a hypothesis, you do an experiment, you learn, et cetera. That's a very slow process. Now we can treat this as a search engine.
Like we search the internet, we now search the space of all possible molecules, not just the ones that people have made or that exist in the universe, but all of them.

[01:13:48:21 - 01:14:42:01]
Max: And we can make this fully automated, that's the hope, right? It becomes a tool where you type what you want, something starts spinning, some experiments get going, and out comes a list of materials. Then you look at it and say, maybe not, and you refine your query a little bit. And you do research with this search engine, where a huge amount of computation and experimentation is happening somewhere far away, in some lab or some data center. I find this a very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you come up with a plastic that destroys itself after, I don't know, a few weeks, and actually becomes a fertilizer? These things are not impossible at all. They can be done, and we should do them.

[01:14:42:01 - 01:14:47:23]
RJ: Can you tell us a little bit, just generally, about CuspAI? And then I have a ton of questions.

[01:14:47:23 - 01:14:48:15]
Speaker 5: Yeah.

[01:14:48:15 - 01:17:49:10]
Max: So CuspAI started about 20 months ago, and it was because I was worried, and I'm still worried, about climate change. I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then spend another half century or even a century removing carbon dioxide from the atmosphere, not by reducing emissions, but actually removing it, at a rate that's about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right?
It's going to be much more. And I don't think people quite understand how bad that can be; four degrees is very bad. So this technology needs to be developed. That was my and my co-founder Chad Edwards' motivation to start this startup. And also because we saw the technology was ready, which is also very important: the time was right to do it. In the meanwhile, we've grown to about 40 people. We've collected about 130 million in investment into the company, which for a European company is quite a lot. It's interesting that right after that, other startups got even more, which tells you how fast this is growing. We've built the platform, of course, but it covers a series of material classes and needs to be constantly expanded to new material classes. And it can be more automated: by putting LLMs in, the whole thing gets more and more automated. And now we're moving to high-throughput experimentation, connecting the actual platform, which is computational, to the experiments, so that you can also get fast feedback from experiments. I don't want to think of experiments as something you do at the end, although that's what we've been doing so far. I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, maybe the fastest possible. It's a bit hard to program, because you have to do all these experiments; those are quite bulky, it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations.
Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. And that's the vision we have. We don't say superintelligence, because I don't quite know what it means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.

[01:17:49:10 - 01:18:01:02]
Brandon: That actually brings up a question I wanted to ask you. First of all, can you talk about your platform, to whatever degree you can? Explain how it works and what your thought process was in developing it.

[01:18:01:02 - 01:20:47:22]
Max: Yeah, it's been surprisingly... it's not rocket science, I would say, in the sense of the design. The design that I wrote down at the very beginning is still more or less the design, although you add things. I wasn't thinking very much about multi-scale models, and it became clear that multi-scale is actually very important. In the beginning I wasn't thinking very much about self-driving labs, but now I think we are at the stage where we should be adding that. So there are bits and details that we keep adding. But more or less it's what you see in the slide decks here as well: there is a generative component that you have to train to generate candidates, and then there is a digital twin, a multi-scale, multi-fidelity digital twin, where you walk through the steps of a ladder. You do the cheap things first and weed out everything that's obviously not useful, and then you go to more and more expensive things later. And so you narrow things down to a small number; those go into an experiment. You do the experiment, get feedback, et cetera. Now, things that have been added more recently are the more agentic parts.
We have agents that search the literature, the actual chemical literature, and come up with chemical suggestions for doing experiments. We have agents which autonomously orchestrate all of the computations and the experiments that need to be done. They're in various stages of maturity, and they can be continuously improved, I would say. So the design of that thing is not surprising; what is surprisingly hard is to actually build it. That's where the moat is: in the data you can get your hands on, and in actually building the platform. And there are two people in particular I want to call out: Felix Hunker, who is building the scientific part of the platform, and Sandra de Maria, who is building the MLOps part of the platform. And recently we also added Aaron Walsh to our team, a very accomplished scientist from Imperial College; we're very happy about that. He's going to be chief science officer. And we also have a partnerships team that seeks out the customers, because this is one thing I find very important: it is so complex to actually bring a material to the real world that you must do this in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.

[01:20:47:22 - 01:20:55:12]
Brandon: Makes a lot of sense.
Over the evolution of the platform, did you find that human intervention...

[01:20:56:18 - 01:21:17:01]
Brandon: I guess you could imagine two directions when you start out. You could make everything purely automated, agentic, and so on, and then later find that you need more human input and feedback at different steps. Or did you start out with human feedback at lots of steps and then figure out ways to remove it?

[01:21:17:01 - 01:22:39:18]
Max: It's the second one. So you build tools. It's much more modular than you think: we need these tools for this application, we need those tools for that one. So you build all these tools, and then you go through a workflow, in the beginning just manually. You run first this tool, then that one, then another. So you put them in a workflow, and then you figure out: oh, actually, this porous material that we are trying to make collapses if you shake it a bit. Okay, then you add a new tool that tests for stability. Right. And so there are more and more tools. And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, maybe trained to be a good chemist, that will then start to use all these tools in the right way, in the right order. But in the beginning, it's you as a chemist putting the workflow together. And then you think about, okay, how am I going to automate this? One very easy question you can ask yourself: every time somebody who is not a super expert in DFT wants to do a calculation, they have to go to somebody who knows DFT.
And so could you start to automate that away? That is, make it so user friendly that you actually do the right DFT for the right problem, for the right length of time, and you can actually assess whether it's a good outcome, et cetera. So you start to automate small pieces and bigger pieces, and in the end the whole thing is automated.

[01:22:39:18 - 01:22:53:25]
Brandon: So your philosophy is that you want to provide a set of specific tools that make it so that the scientists making decisions are better informed, rather than trying to create a fully automated process.

[01:22:53:25 - 01:23:22:01]
Max: It's sort of the same as what you're saying, because yes, we want to automate, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it's a retreat, right? First, you need an expert to tell you precisely how to set the parameters of the DFT calculation. Okay, maybe we can take that out, maybe we can automate that. And so increasingly more of these things are going to be removed.

[01:23:22:01 - 01:23:22:19]
Speaker 5: Yeah.

[01:23:22:19 - 01:24:33:25]
Max: In the end, the vision is that it will be a search engine where a chemist will type things and get candidates, but the chemist will still decide what is a good material and what is not a good material out of that list, right? The vision of a completely dark lab, where you can close the door and just say, find something interesting, and it figures out what's interesting and comes back with "I found this new material that does such and such": that's not the vision I have. Not for a long time. So for me, it's really about empowering the domain experts sitting in the companies and in the universities to be much faster in developing their materials.
And I should say, it's also good to be a little humble at times, because it is very complicated, you know, to make it and to bring it into the real world. And there are people that are doing this for their entire lives. Yeah. Right. And it's like, I wonder if they scratch their head and say, well, you know, how are you going to completely automate that away, like in the next five years? I don't think that's going to happen at all.[01:24:35:01 - 01:24:39:24]Max: Yeah. So to me, it's an increasingly powerful tool in the hands of the chemists.[01:24:39:24 - 01:25:04:02]RJ: I have a question. You've talked before about getting people interested based on having, you know, sort of a big breakthrough in materials versus incremental change. I'm curious what you think about the platform you have now and what you're stepping towards. Are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?[01:25:04:02 - 01:26:04:27]Max: We follow a mixed strategy. So we are definitely going after a big material. Again, we do this with a partner. I'm not going to disclose precisely what it is, but we have our own kind of long term goal. You could call it a lighthouse or, you know, sort of a moonshot or whatever, but it is going to be a really impactful material that we want to develop as a proof point that it can be done, that it will make it into the real world, and that AI was essential in actually making it happen. At the same time, we also are quite happy to work with companies that have more modest goals. Like I would say one is a very deep partnership where you go on a journey with a company and that's a long term commitment together. And the other one is like somebody says, I need a force field. Can you help me train this force field and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that. And then maybe after that we'll see.
And that's fine too. Right. But we prefer, you know, the deep partnerships where we can really change something for the good.[01:26:04:27 - 01:26:22:02]RJ: Yeah. And do you feel like from a platform standpoint you're ready for that? And again, not asking you to disclose proprietary secret sauce, but what are the things, generally speaking, that need to happen from where we are to get those big breakthroughs?[01:26:22:02 - 01:28:40:01]Max: What I find interesting about this field is that every time you build something, it's actually immediately useful. Right. And so unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and nothing, nothing, nothing, nothing. And then it has to happen. Right. And when it happens, it's huge. So it's quite different here, because every time you introduce, so you go to a customer and you say, so what do you need? Right. So we work, let's say, on a problem like water filtration. We want to remove PFAS from water. Right. So we do this with a company, Camira. So they are a deep partner for us. Right. So we're on a journey together. I think that the breakthrough will happen with a lot of human in the loop, because there are the chemists, who have a whole lot more knowledge of their field, and it's us who will help them with training, with new methods. And in that kind of interface, these interactions, something beautiful will happen, and that will have to happen first before this field will really take off, I think. And so in the sense that it's not a bubble, let's put it that way. So that people see that what's happening is actually real. So in the beginning, it will be very, you know, with a lot of humans in the loop, I would say, and I would hope we will have this new sort of breakthrough material before, you know, everything is completely automated, because that will take a while. And also it is very vertical specific.
So it's like completely automating something for problem A, you know, you can probably achieve it, but then you'll sort of have to start over again for problem B, because, you know, your experimental setup looks very different, the machines that you characterize your materials with look very different. Even the models in your platform will have to be retrained and fine tuned to the new class. So every time, you know, you have a lot of learnings to transfer, but also, you know, the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is kind of a long term vision. And I would say every time you move to something new, you'll have to start retraining, and humans will have to come in again and say, okay, so what does this problem look like? And now sort of, you know, point the machine again, you know, in the new direction, and then use it again.[01:28:40:01 - 01:28:47:17]RJ: For the non-scientists among us, me included, though a bit of a scientist. There's a lot of terminology. You mentioned DFT,[01:28:49:00 - 01:29:01:11]RJ: and equivariance we've talked about. Can you sort of explain, in engineering terms, or at the level of sophistication of engineering, well, what is equivariance?[01:29:01:11 - 01:29:55:01]Max: So equivariance is the infusion of symmetry in neural networks. So if I build a neural network, let's say, that needs to recognize this bottle, right, and then I rotate the bottle, it will then actually have to completely start again, because it has no idea that the input that represents a rotated bottle is actually a rotated bottle. It just doesn't understand that. Right. If you build equivariance in, basically once you've trained it in one orientation, it will understand it in any other orientation. So that means you need a lot less data to train these models. And these are constraints on the weights of the model.
So basically you have to constrain the weights such that it understands this. And you can build it in, you can hard code it in. And yeah, the symmetry groups can be, you know, translations, rotations, but also permutations. In a graph neural network, it's permutations, and then physics, of course, has many more of these groups.[01:29:55:01 - 01:30:01:08]RJ: To play devil's advocate, why not just use data augmentation, where your bottle is in all the different orientations?[01:30:01:08 - 01:30:58:23]Max: It's an option, it's just not exact. It's like, why would you go through the work of doing all that, where you would really need an infinite number of augmentations to get it completely right, when you can also hard code it in? Now, I have to say, sometimes actually data augmentation works even better than hard coding the equivariance in. And this has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface or objective becomes more complicated. And so it's harder to find good minima. So there is also a complicated interplay, I think, between the optimization process and these constraints you put in your network. And so, yeah, you'll hear kind of contradicting claims in this field. Like some people say, for certain applications, it works just better than not doing it. And sometimes you hear other people say, if you have a lot of data and you can do data augmentation, then actually it's easier to optimize, and it actually works better than putting the equivariance in.[01:30:58:23 - 01:31:07:16]Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?[01:31:07:16 - 01:31:46:06]Max: Yeah, ultimately it's a trade-off between data and inductive bias. So if your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do.
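Max's point that a hard-coded symmetry gives exact invariance, while augmentation only approximates it, can be shown in a few lines of numpy. This is a minimal sketch, not from the episode, using the permutation case he mentions for graph networks: a sum-pooled "set" network is invariant by construction, while a plain MLP on the flattened input is not and would need augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: a set of 5 atoms with 3 features each (the ordering is arbitrary).
x = rng.normal(size=(5, 3))
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(15, 8))

def invariant_net(atoms):
    # Symmetry built in: sum-pooling makes the output identical under
    # any permutation of the atoms (a Deep Sets-style construction).
    return np.tanh(atoms @ W1).sum(axis=0)

def plain_mlp(atoms):
    # No symmetry: flattening ties each atom to a fixed slot.
    return np.tanh(atoms.reshape(-1) @ W2)

perm = rng.permutation(5)
print(np.allclose(invariant_net(x), invariant_net(x[perm])))  # True: exact by construction
print(np.allclose(plain_mlp(x), plain_mlp(x[perm])))          # False: would need augmentation
```

The invariant network is exactly unchanged by reordering, with no extra training data; the plain MLP treats the reordered input as a brand-new example, which is the gap data augmentation tries to paper over.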
But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. But yeah, so there is a bitter lesson. And one of the bitter lessons is you should always make sure your architecture scales, unless you have a tiny data set, in which case it doesn't matter. But, you know, the same bitter lessons, or lessons that you can draw in LLM space, are eventually going to be true in this space as well, I think.[01:31:47:10 - 01:31:55:01]RJ: Can you talk a little bit about your upcoming book and tell the listeners, like, what's exciting about it? Yeah, I should read it.[01:31:55:01 - 01:33:42:20]Max: So this book is about, it's called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into both generative AI, which is the technology to generate images and videos, and this field of non-equilibrium statistical mechanics, which is about systems of molecules that are just moving around and relaxing to the ground state, or that you can control to be in a certain state, the mathematics of these two is actually identical. And so that's fascinating. And in fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago. And there's also Carl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics, which is called stochastic thermodynamics or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about, but we can learn a lot from. And I think it can sort of now start to cross-fertilize.
When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and say, okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to now help the scientists do better science. And so it becomes a beautiful cross-fertilization between these two fields. The book is rather technical, I would say. And it takes all sorts of things that have been done in stochastic thermodynamics, and all sorts of models that have been done in the machine learning literature, and it basically equates them to each other. And I think, hopefully, that sense of unification will be revealing to people.[01:33:42:20 - 01:33:44:05]RJ: Wait, and when is it out?[01:33:44:05 - 01:33:56:09]Max: Well, it depends on the publisher now. But I hope in April. I'm going to give a keynote at ICLR, and it would be very nice if I have this book in my hand. But you know, it's hard to control these kinds of timelines.[01:33:56:09 - 01:33:58:19]RJ: Yeah, I'm looking forward to it. Great.[01:33:58:19 - 01:33:59:25]Max: Thank you very much. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe
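One concrete point of contact between the two fields Max describes: the overdamped Langevin update that relaxes a molecular system toward its Gibbs distribution has the same form as the sampling step in score-based (diffusion) generative models, with a learned score standing in for the physical force. A minimal numpy sketch of the physics side, using a toy double-well potential:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: Gibbs distribution p(x) ~ exp(-U(x)) for a double-well potential.
def grad_U(x):
    return 4 * x**3 - 4 * x          # U(x) = x^4 - 2x^2, minima at x = +/-1

# Overdamped Langevin dynamics: x += -grad U(x)*dt + sqrt(2*dt)*noise.
# The same update, with -grad U replaced by a learned score grad log p(x),
# is the sampler used in score-based generative models.
dt, n_steps = 0.01, 5000
x = np.zeros(1000)                   # 1000 parallel walkers, started at the barrier
for _ in range(n_steps):
    x += -grad_U(x) * dt + np.sqrt(2 * dt) * rng.normal(size=x.shape)

# Walkers concentrate in the two wells near x = +/-1.
print(np.mean(np.abs(x)))
```

This is a generic textbook illustration of the correspondence, not an excerpt from the book; the book's contribution, as Max describes it, is mapping these models onto each other systematically.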
In this episode of The Backstory on the Shroud of Turin, host Guy Powell interviews evangelical apologist and theologian William Red. The two dive deep into Jewish burial customs from the first century and how these practices offer compelling support for the authenticity of the Shroud of Turin. Red details how key figures like Nicodemus and Joseph of Arimathea honored Jesus Christ with kingly burial rites: 75 pounds of burial spices and fine linen, just as one would expect for a royal entombment. The conversation doesn't stop at tradition. Red explores modern science and its contributions to the Shroud's authenticity, utilizing odds calculus and Bayesian probability to determine the likelihood of forgery. By considering over 30 lines of evidence, including blood chemistry and textile analysis, Red concludes that the probability of forgery is astronomically low. Whether you're grounded in faith or in data, this conversation challenges your perspective on the most famous burial cloth in history.
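The "odds calculus" mentioned here is the standard rule that posterior odds equal prior odds times the product of the likelihood ratios (Bayes factors) from the lines of evidence. A sketch with invented numbers, not the episode's figures:

```python
# Odds calculus: posterior odds = prior odds * product of likelihood ratios.
# The Bayes factors below are invented for illustration only; they are not
# the figures discussed in the episode.
prior_odds = 1 / 1000                # start heavily favouring "forgery"
bayes_factors = [10, 5, 20, 8, 3]    # hypothetical independent lines of evidence

posterior_odds = prior_odds
for bf in bayes_factors:
    posterior_odds *= bf

prob_authentic = posterior_odds / (1 + posterior_odds)
print(round(posterior_odds, 1), round(prob_authentic, 3))  # 24.0 0.96
```

The usual caveat with such calculations is the multiplication step: it is only valid if the lines of evidence are genuinely independent, which is where critics of evidence-stacking arguments typically push back.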
2026-02-20 Hosts Craig Lipset, Dr. Amir Kalali, and Jane Myles were joined by Meghana Chalasani, Mary Thanh Hai, and Mitch Psotka from the FDA. Today's session focused on Selective Safety Data Collection (SSDC) and how it can streamline clinical trials when a drug's safety profile is already well-established. Mary Thanh Hai and Mitch Psotka explained that SSDC, formalized through FDA guidance and ICH E19, allows for a planned reduction in low-value safety data while maintaining robust monitoring of serious adverse events and other critical outcomes. The panel addressed common misconceptions, emphasizing that SSDC does not lower safety standards or eliminate oversight; it simply focuses on collecting data that meaningfully informs the risk-benefit profile. Meghana Chalasani also highlighted FDA's C3TI demonstration program and other innovative approaches like Bayesian methods and streamlined trials embedded in clinical practice. The discussion closed with practical site-level considerations, including how to integrate SSDC into existing workflows while maintaining consistency and regulatory alignment. Overall, the conversation underscored a shift toward smarter, more efficient trial design without compromising patient safety. You can join TGIF-DTRA Sessions live on LinkedIn Live on Fridays at 12:00 PM ET by checking out our LinkedIn. Follow the Decentralized Trials & Research Alliance (DTRA) on LinkedIn and X. Learn more about Membership options and our work at www.dtra.org.
Whether it be in politics, public health, or corporate finance, why are people more likely to interpret facts or data in a way that fits their preconceived notions about the world as opposed to searching for the fundamental truth? A new paper from the Harvard Business School called Sharing Models to Interpret Data (by Joshua Schwartzstein and Adi Sunderam) studies the propensity for people to adopt interpretations of data based on their community's beliefs, and why this can lead to less accurate conclusions. Hosts and finance professors Jonathan Berk and Jules van Binsbergen are joined by the paper's co-author Adi Sunderam, who is a professor of corporate finance at Harvard Business School, a research associate at the National Bureau of Economic Research, and a co-editor of the Journal of Finance. The conversation covers the complexity of Bayesian updating and how the process is improperly deployed in today's thinking, not only in corporate decision-making but also on a sociological level. They also discuss Sunderam's model for explaining how people interpret data, why people are more likely to fall into group-belief dynamics, and whether there are any interventions that would lead to better decision-making. Read Adi Sunderam and Joshua Schwartzstein's paper: Sharing Models to Interpret Data Find All Else Equal on the web: https://lauder.wharton.upenn.edu/allelse/ All Else Equal: Making Better Decisions Podcast is a production of the UPenn Wharton Lauder Institute through University FM. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
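The episode's core mechanism, that people who share a "model" of the world can rationally update on the same data and still diverge from people with a different model, can be illustrated with a toy Bayesian update. The numbers below are illustrative only, not from the paper:

```python
# Two observers see the same 10 data points but hold different "models"
# (likelihoods), so honest Bayesian updating drives their beliefs apart.
# All numbers are invented for illustration.
from math import prod

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 8 successes, 2 failures

def posterior_h(data, p_success_if_h, p_success_if_not_h, prior=0.5):
    """Posterior P(H | data) under a given likelihood model."""
    like_h = prod(p_success_if_h if d else 1 - p_success_if_h for d in data)
    like_n = prod(p_success_if_not_h if d else 1 - p_success_if_not_h for d in data)
    return prior * like_h / (prior * like_h + (1 - prior) * like_n)

# Observer A's model: the hypothesis strongly predicts success.
a = posterior_h(data, 0.9, 0.5)
# Observer B's model: the hypothesis predicts failure.
b = posterior_h(data, 0.4, 0.5)
print(round(a, 3), round(b, 3))  # 0.815 0.195
```

Both observers apply Bayes' rule correctly; the disagreement comes entirely from the interpretation layer, which is the gap the paper focuses on.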
In Episode 179, hosts Faye Holland and James Parton sit down with Irina Barbina (CEO) and Matthew Griffiths (CTO) to unpick how Concr is using predictive modelling and digital twins to transform cancer drug development. Cancer data is fragmented. Clinical trials, pre-clinical research, and real-world patient data exist in silos. There's no unified way to predict how individual patients will respond to specific therapies, until now. Concr's technology borrows from astrophysics, specifically, how scientists model dark matter using gravitational lensing. The parallel is striking: astrophysicists can't directly observe dark matter, so they build complex simulations to infer its distribution. Concr can't directly know why a drug worked for a patient, so they build digital twin simulations to predict outcomes. Key innovations:
· Bayesian inference at scale to handle messy, incomplete cancer data
· Hierarchical modelling that learns from shared biology across cancer types
· 94% prediction accuracy on retrospective clinical trial data
· Prospective validation underway with NHS partners and pharma companies
Concr dramatically reduces the cost and complexity of clinical trials. This episode brilliantly illustrates why Cambridge is a global innovation hub. It's not just about brilliant science, it's about brilliant people from different disciplines colliding, recognising patterns, and building companies that matter. Hosted on Acast. See acast.com/privacy for more information.
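The "hierarchical modelling that learns from shared biology" idea can be sketched generically as partial pooling: noisy estimates from small groups are shrunk toward a shared mean, borrowing strength across groups. This is a textbook empirical-Bayes sketch with invented numbers, not Concr's actual model:

```python
import numpy as np

# Toy response data per cancer type: (responders, patients). Values are
# invented; this is a generic partial-pooling sketch, not Concr's model.
trials = {"lung": (12, 40), "breast": (9, 25), "rare": (2, 4)}

rates = {k: r / n for k, (r, n) in trials.items()}
mu = np.mean(list(rates.values()))         # shared mean across cancer types
tau2 = 0.02                                # assumed between-type variance

shrunk = {}
for k, (r, n) in trials.items():
    p = r / n
    se2 = p * (1 - p) / n                  # within-type sampling variance
    w = tau2 / (tau2 + se2)                # precision weight
    shrunk[k] = w * p + (1 - w) * mu       # shrink noisy estimates toward mu

for k in trials:
    print(k, round(rates[k], 2), "->", round(shrunk[k], 2))
```

The small "rare" cohort moves substantially toward the shared mean while the large "lung" cohort barely moves, which is exactly the behaviour you want when some cancer types have abundant data and others almost none.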
In this episode of Communicable, Navaneeth Narayanan and Josh Nosanchuk invite Virginie Lemiale and Elie Azoulay (Paris, France) as well as fellow editor Emily McDonald (Montreal, Canada)—this time as guest—to discuss adjunctive steroid therapy for pneumocystis pneumonia (PCP) in HIV-negative individuals. In 2025, Lemiale and Azoulay published results from their double-blind, randomised controlled trial investigating steroid treatment for severe Pneumocystis jirovecii pneumonia (PIC trial) in the Lancet Respiratory Medicine [1]. At first glance, one might dismiss the study's clinical impact due to the ‘negative' result of the primary outcome, mortality at 28 days, which just missed a statistically significant difference between groups. There was a clinical difference, however, and all other outcomes, including 90-day mortality, were significantly different between groups. Understanding how pivotal these results were to clinical practice, McDonald and colleagues sought to contextualise the results of the PIC trial through a Bayesian analysis in a follow-up publication [2]. While the discussion provides useful clinical commentary, it also helps both to demystify Bayesian analysis and to call attention to what might be lost with strict or overly concrete interpretations of traditional frequentist analyses. This episode was peer reviewed by Arjana Zerja from the Mother Theresa University Hospital Center, Tirana, Albania.
References
1. Lemiale V, et al. Adjunctive corticosteroids in non-AIDS patients with severe Pneumocystis jirovecii pneumonia (PIC): a multicentre, double-blind, randomised controlled trial. Lancet Respir Med. 2025;13(9):800-808. doi:10.1016/S2213-2600(25)00125-0.
2. Lee TC, Albuquerque AM, McDonald EG. Contextualizing the use of corticosteroids in severe Pneumocystis jirovecii pneumonia through a Bayesian lens. CMI Commun. 2025;2(4):105141. doi:10.1016/j.cmicom.2025.105141.
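The style of reanalysis discussed here typically converts a trial's summary statistics into a posterior probability of benefit, showing how a result that "just misses" p < 0.05 can still imply a high probability the treatment helps. A generic normal-approximation sketch with invented numbers, not the PIC trial's actual data:

```python
from math import erf, log, sqrt

# Generic sketch of a Bayesian reanalysis from trial summary statistics.
# Numbers are invented for illustration; they are not the PIC trial's results.
est_log_rr = log(0.70)       # estimated log risk ratio (point estimate 0.70)
se = 0.19                    # standard error chosen so two-sided p is just above 0.05

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# Flat (very weak) prior: posterior for the log risk ratio is ~ N(est, se^2).
p_benefit = norm_cdf((0 - est_log_rr) / se)        # P(log RR < 0), i.e. any benefit
p_strong = norm_cdf((log(0.9) - est_log_rr) / se)  # P(RR < 0.9), a meaningful benefit
print(round(p_benefit, 3), round(p_strong, 3))
```

With these made-up inputs the frequentist result is "negative" (p ≈ 0.06), yet the posterior probability of any benefit is about 97%, which is the kind of reframing the follow-up publication uses to contextualise a near-miss primary outcome.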
Epstein proved the unthinkable: some "conspiracy theories" are horrifyingly real. Join the Heretics Community For Bonus Videos: https://andrewgoldheretics.com/ Join world-renowned skeptic Michael Shermer on Heretics for a gripping, evidence-driven conversation that redefines conspiracy theories. From Jeffrey Epstein's elite blackmail network and secret island to the once-banned COVID lab-leak theory, Bill Gates emails, vaccine controversies, moon-landing doubts, and elite power plays, Shermer uses Bayesian reasoning to distinguish real conspiracies from speculation—showing why even die-hard skeptics must update their views when hard evidence emerges. SPONSORS: Organise your life: https://akiflow.pro/Heretics Earn up to 4 per cent on gold, paid in gold: https://www.monetary-metals.com/heretics/ Cut your wireless bill to 15 bucks a month at https://mintmobile.com/heretics He also tackles the decline of religion, Jordan Peterson's secular appeal, transgender ideology as potential social contagion with growing regret lawsuits, objective morality versus cultural trends, immigration politics, and whether true progress is happening or we're just cycling through extremes. Real cases like Epstein remind us: dismissing everything as paranoia can blind us to what's actually true. Epstein proved conspiracies can be real. 
Michael's links: https://www.skeptic.com https://michaelshermer.com #ConspiracyTheories #Epstein #MichaelShermer Join the 30k heretics on my mailing list: https://andrewgoldheretics.com Check out my new documentary channel: https://youtube.com/@andrewgoldinvestigates Andrew on X: https://twitter.com/andrewgold_ok Insta: https://www.instagram.com/andrewgold_ok Heretics YouTube channel: https://www.youtube.com/@andrewgoldheretics Chapters: 00:00 Welcome & Heretical Mindset 04:55 Bayesian Thinking & Changing Your Mind 09:50 COVID Origins, Lab Leak & Politicized Science 14:40 Vaccines, Boosters & Precautionary Principle Failures 19:25 Epstein Files, Bill Gates Emails & Elite Blackmail Theories 24:30 Why Epstein Was Real – & What It Means for Other Conspiracies 29:45 Moon Landing Hoax Claims Debunked 34:20 Decline of Religion, Jordan Peterson & Secular Morality 39:55 Transgender Surge, Social Contagion & Regret Lawsuits 44:50 Objective Moral Truths vs Cultural Fashion 49:55 Immigration, Empathy & Political Strategy Conspiracies 54:50 Progress, Backsliding & Hanlon's Razor 59:55 A Heretic Michael admires Learn more about your ad choices. Visit megaphone.fm/adchoices
From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation. In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork. Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. 
steering a trillion-parameter model in real time by targeting internal features) plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.

We discuss:
* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: “surgical edits” for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live and finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:05:30 Understanding Errors
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:15:00 Troubleshooting Errors
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:20:45 Conclusion
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript
Shawn Wang [00:00:05]: So welcome to the Latent Space pod. We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. And Vibhu's special co-host, Mochi, the mechanistic interpretability doggo. 
We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was like still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.Mark Bissell [00:02:02]: Thank you. 
Yeah, no, things move fast.Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting health use cases. I don't know how related they are in practice.Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.Myra Deng [00:02:37]: Did we overlap at all?Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to what all gets done at Goodfire?Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team. And at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as I could describe myself, because I've worked on a range of things. And, you know, it's a fun time to be at a team that's still reasonably small. I think when I joined I was one of the first, like, ten employees; now we're above 40, but still, there's always a mix of research and engineering and product and all of the above that needs to get done. And I think everyone across the team is pretty much a switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. 
More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform, flexing some of the MLE and developer skills as well.
Shawn Wang [00:03:53]: Very generalist. And you also had a very founding-engineer-type role.
Myra Deng [00:03:58]: Yeah, yeah. I also started as, and still am, a member of technical staff, and did a wide range of things from the very beginning, including finding our office space and all of this.
Shawn Wang [00:03:59]: Which we both visited when you had that open house thing. It was really nice.
Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.
Shawn Wang [00:04:15]: It looked like it has room for 200 people. But you guys are like 10.
Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like Mark, I spend a lot of my time, as head of product, thinking about how we take our frontier research and really apply it to the most important real-world problems, how that then translates into a repeatable platform or a product, and working across the engineering and research teams to make that happen. And also communicating to the world: what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.
Shawn Wang [00:05:01]: I love the "what is" questions, because that's a very crisp starting point for people coming to a field. We'll do a fun thing: Vibhu, why don't you try tackling "what is interpretability," and then they can correct us.
Vibhu Sapra [00:05:13]: Okay, great. So just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right?
Which is pretty different than normal interp, which is a lot of background research. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay, what is interp? So basically you're trying to have an understanding of what's going on inside the model, in its internals. There are different approaches to do that: you can do probing, SAEs, transcoders, all this stuff. But basically you have a hypothesis, something that you want to learn about what's happening in a model's internals, and then you try to answer it from there. You can do things like activation patching, you can try steering. There's a lot you can do, but the key question is: from input to output, we want a better understanding of what's happening, and how we can adjust what's happening in the model's internals. How'd I do?
Mark Bissell [00:06:12]: That was really good. I think that was great. It's also kind of a minefield: if you ask 50 people who quote-unquote work in interp what interpretability is, you'll probably get 50 different answers. And, to some extent, also where Goodfire sits in the space. I think that we're an AI research company above all else, and interpretability is a set of methods that we think are really useful and worth specializing in, in order to accomplish the goals we want to accomplish. But I think we also see some of the goals as even broader, as almost the science of deep learning, taking a not-black-box approach to any part of the AI development life cycle, whether that
means using interp for data curation while you're training your model, or for understanding what happened during post-training, or for understanding activations and internal representations, what is in there semantically. And then a lot of the exciting updates that are also part of the fundraise are around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff has been post-hoc poking at models, as opposed to actually using it to intentionally design them.
Shawn Wang [00:07:29]: Is this post-training or pre-training, or is that not a useful distinction?
Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.
Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking of rollouts, or having different variations of a model that you can tweak with your steering. Yeah.
Myra Deng [00:07:50]: And I think in a lot of the news that you've seen on Twitter or wherever, you've seen a lot of unintended side effects come out of post-training processes: overly sycophantic models, or models that exhibit strange reward-hacking behavior. Those are extreme examples. There are also more mundane enterprise use cases where they try to customize or post-train a model to do something, and it learns some noise, or it doesn't appropriately learn the target task. And a big question that we've always had is: how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?
Shawn Wang [00:08:26]: Yeah. Just to anchor this for people, one of the biggest controversies of last year was 4o GlazeGate. I've never heard of GlazeGate.
I didn't know that was what it was called. They called it that on the blog post, and I was like, wow, OpenAI officially used that term. And I'm like, that's funny. But yeah, I guess the pitch is that if they had worked with Goodfire, they would have avoided it. You know what I'm saying?
Myra Deng [00:08:51]: I think so. Yeah. Yeah.
Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think the reason why post-training is a place where this makes a lot of sense is that a lot of what we're talking about is surgical edits. You want to be able to take expert feedback and very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. So another common area where you would want to make a somewhat surgical edit is models that have, say, political bias. Like you look at Qwen or R1 and they have this sort of CCP bias.
Shawn Wang [00:09:27]: Is there a CCP vector?
Mark Bissell [00:09:29]: Well, there are certainly internal parts of the representation space where you can see where that lives. And you want to extract that piece out.
Shawn Wang [00:09:40]: Well, I always say, whenever you find a vector, a fun exercise is just to make it very negative to see what the opposite of CCP is.
Mark Bissell [00:09:47]: The super-America, bald eagles flying everywhere. But yeah. So in general, lots of post-training tasks where you'd want to be able to do that, whether it's unlearning a certain behavior. Another case where this comes up: are you familiar with the grokking behavior?
I mean, I know the machine learning term of grokking.
Shawn Wang [00:10:09]: Yeah.
Mark Bissell [00:10:09]: Sort of this double descent idea, of having a model that learns a generalizing solution: even if memorization of some task would suffice, you want it to learn the more general way of doing the thing. And so another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible ways to do that. Can interp solve the double descent problem?
Shawn Wang [00:10:41]: Depends, I guess, on how you... Okay. So I viewed double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. But if you can actually interpret what is generalizing, what is still changing even though the loss is not changing, then maybe you can not view it as a double descent problem. You're just translating the space in which you view loss, and then you have a smooth curve. Yeah.
Mark Bissell [00:11:11]: I think that's certainly the domain of problems that we're looking to get at.
Shawn Wang [00:11:15]: Yeah. To me, double descent is like the biggest thing in ML research, where if you believe in scaling, you need to know where to scale. But if you believe in double descent, then you don't believe in anything where anything levels off.
Vibhu Sapra [00:11:30]: I mean, also tangentially, when you talk about the China vector, there's the subliminal learning work. It was from the Anthropic Fellows program, where basically you can have hidden biases in a model.
And as you distill down, or as you train on distilled data, those biases still show up, even if you explicitly try not to train on them. So it's just another use case: if we can interpret what's happening in post-training, can we clear some of this out? Can we even determine what's there? Because there's some worrying research out there that shows we really don't know what's going on.
Mark Bissell [00:12:06]: Yeah. I think that's the biggest sentiment that we're hoping to tackle: nobody knows what's going on. Right? Subliminal learning is just an insane concept when you think about it. Train a model on, not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. And you see behaviors like that that just defy intuition. There are mathematical explanations that you can get into, but...
Shawn Wang [00:12:34]: It feels so early days. Objectively, there are sequences of numbers that are more owl-like than others. There should be.
Mark Bissell [00:12:40]: According to certain models, right? It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.
Shawn Wang [00:12:49]: But I mean, I think that's a cheat code because there's not enough compute. But if you believe in something like the platonic representation hypothesis, probably it will transfer across different models as well.
Mark Bissell [00:13:00]: Oh, you think so? I think of it more as a statistical artifact of models initialized from the same seed. There's something path-dependent from that seed that might cause certain overlaps in the latent space, and then doing this distillation pushes it towards having certain other tendencies.
Vibhu Sapra [00:13:24]: Got it.
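One concrete version of "can we clear some of this out," for a direction you have already found (like the CCP vector discussed above), is directional ablation: project the concept direction out of every activation so no component along it survives. This is a toy numpy sketch with a synthetic concept direction, not Goodfire's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)                 # unit-norm "bias" direction (synthetic)

# Activations with the concept baked in
acts = rng.normal(size=(100, d)) + 3.0 * concept

# Directional ablation: subtract each activation's component along the concept
ablated = acts - np.outer(acts @ concept, concept)

# Nothing along the concept direction survives
assert np.allclose(ablated @ concept, 0.0, atol=1e-8)
```

Negating instead of removing the direction (Shawn's "make it very negative") would just flip the sign of the subtracted term and scale it up.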
I think there are a bunch of these open-ended questions, right? Like, you can't train in new stuff during the RL phase, right? RL only reorganizes weights, and you can only do stuff that's somewhat there in your base model. You're not learning new stuff; you're just reordering chains and stuff. But okay, my broader question is: when you work at an interp lab, how do you decide what to work on, and what's the thought process? Because we could ramble for hours: I want to know this, I want to know that. But concretely, what's the workflow? There are different approaches to solving a problem: I can try prompting, I can look at chain of thought, I can train probes, SAEs. But how do you determine whether something is going anywhere? Do you have a set process? If you can help me with all that. Yeah.
Myra Deng [00:14:07]: It's a really good question. From the very beginning of the company, we've thought: let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. So even our chief scientist, Tom, spends a lot of time talking to customers, trying to understand what the real-world problems are, then taking that back, applying the current state of the art to those problems, and seeing where it falls down, basically. And then using those failures or shortcomings to understand which hills to climb in interpretability research. So on the fundamental side, for instance, when we have done work applying SAEs and probes, we've encountered some shortcomings in SAEs that we found a little bit surprising.
And so we've gone back to the drawing board and done work on that. We've done some work on better foundational interpreter models, and a lot of our team's research is focused on what the next evolution beyond SAEs is, for instance. And then when it comes to control and design of models: we tried steering with our first API and realized that it still fell short of black-box techniques like prompting or fine-tuning. So we went back to the drawing board and asked, how do we make that not the case, how do we improve it beyond that? One of our researchers, Ekdeep, who just joined, and Atticus are steering experts and have spent a lot of time figuring out what research enables us to actually do this in a much more powerful, robust way. So yeah, the answer is: look at real-world problems, try to translate that into a research agenda, and then hill-climb on both of those at the same time.
Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double-click when you drop hints like "we found some problems with SAEs." Okay, what are they? And then we can go into the demo. Yeah.
Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But for instance, when we do things like trying to detect behaviors within models that are harmful, or behaviors that a user might not want to have in their model: hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks: taking the feature activation space from SAEs, training classifiers on top of that, and seeing how well we can detect the properties we might want to detect in model behavior.
And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think SAEs are actually capturing the concepts you would want to capture, cleanly and more surgically. So that's an interesting observation. I'm not down on SAEs at all; there are many, many things they're useful for. But we have definitely run into cases where the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual real-world downstream performance metrics.
Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods: you get to peek into the AI's mind, but sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, wasn't it an SAE-based approach that actually did prove to be the most generalizable?
Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And a lot of the reason it worked well was that we had a noisier data set. So actually the blessing of unsupervised learning is that we got more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases, where we've had good data sets, it hasn't been the case.
Shawn Wang [00:18:14]: And just because you named Rakuten, and I don't know if we'll get another chance: what is Rakuten's production usage? Yeah.
Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII so that they don't route private user information.
Myra Deng [00:18:41]: And so that's going through all of their user queries every day. That's something that we deployed with them a few months ago.
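The raw-activation baseline Myra describes can be sketched as a plain linear probe fit directly on activations, with no SAE in the loop. Everything below (the planted "PII" direction, the labels, the activations) is synthetic; a real pipeline would use activations collected from the model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 64
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)                    # synthetic "PII" direction

labels = rng.integers(0, 2, size=n)                   # 1 = "contains PII" (made up)
# Activations carry the concept direction only when the label is 1
acts = rng.normal(size=(n, d)) + 4.0 * np.outer(labels, concept)

# Least-squares linear probe on raw activations
X = np.hstack([acts, np.ones((n, 1))])                # bias column
w, *_ = np.linalg.lstsq(X, labels * 2.0 - 1.0, rcond=None)
acc = ((X @ w > 0).astype(int) == labels).mean()
print(acc)                                            # high accuracy on this easy toy task
```

An SAE probe would first map `acts` through the SAE encoder and fit the same classifier on the sparse feature activations; Myra's point is that this extra step does not always help.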
And now we are actually exploring very early partnerships, not just with Rakuten but with others, around how we can help with potentially training and customization use cases as well. Yeah.
Shawn Wang [00:19:03]: And for those who don't know, Rakuten is, I think, the number one or number two e-commerce store in Japan. Yes. Yeah.
Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of what it looks like to deploy things in practice, which you don't always think about when you're doing research tasks. Some of the stuff that came up there is more complex than your idealized version of a problem. They were encountering things like synthetic-to-real transfer of methods: they couldn't train probes, classifiers, things like that on actual customer PII data. So what they had to do is use synthetic data sets, and then hope that that transfers out of domain to real data sets. We can evaluate performance on the real data sets, but not train on customer PII. So that, right off the bat, is a big challenge. You have multilingual requirements: this needed to work for both English and Japanese text, and Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that had us pulling our hair out. And then also, for a lot of tasks, you might make simplifying assumptions if you're treating it as the easiest version of the problem, just to get general results. Maybe you say you're classifying a sentence: does this contain PII? But the need Rakuten had was token-level classification, so that you could precisely scrub out the PII. So as we learned more about the problem, a lot of assumptions ended up breaking. And that was just one instance.
A problem that seems simple right off the bat ends up being more complex as you keep diving into it.
Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is that a lot of these methods are very efficient, right? You're just looking at a model's own internals, compared to a separate guardrail: LLM-as-a-judge is a separate model. One, you have to host it; two, there's a whole latency cost, and if you use a big model, you have a second call. Some of the work around self-detection of hallucination is also deployed for efficiency. So if you have someone like Rakuten doing it in production, live, that's just another thing people should consider.
Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. It's no extra latency, really.
Shawn Wang [00:21:17]: Excellent. You have the steering demos lined up, so let's just see what you've got. I don't actually know if this is the latest-latest or an alpha thing.
Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. So this will give a sense for the technology; you can see steering in action. Honestly, I think the biggest thing this highlights is that as we've been growing as a company and taking on more and more ambitious versions of interpretability-related problems, a lot of that comes down to scaling up in various forms. So here you're going to see steering on a 1-trillion-parameter model. This is Kimi K2. And it's fun that in addition to the research challenges, there are engineering challenges that we're now tackling, because for any of this to be useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to toy model organisms.
So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. I've got two terminal sessions pulled up here. On the left is a forked version of the Kimi CLI that we've got running to point at our custom-hosted Kimi model. And on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. The CLI is running locally, but the Kimi server is running back at the office. Well, hopefully. The model is too much to run on this Mac; it takes a full H100 node. I think you can run it on eight H100 GPUs. So, yeah, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang codebase that we've been working on. So I'm going to tell it: hey, this SGLang codebase is slow, I think there's a bug, can you try to figure it out? It's a big codebase, so it'll spend some time doing this. And then on the right here, I'm going to initialize, in real time, some steering. Let's see here.
Mark Bissell [00:23:33]: Searching for any bugs... Feature ID 43205.
Shawn Wang [00:23:38]: Yeah.
Mark Bissell [00:23:38]: So this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. On the left it's still thinking normally; it might take, I don't know, 15 seconds for this to kick in, but then we're hopefully going to start seeing "this code base is massive, for real." We're going to see Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and its actual outputs.
Mark Bissell [00:24:19]: And interestingly, you can see it's still able to call tools and stuff. It's purely its demeanor.
And there are other features that we found for interesting things like concision (that's a more practical one; you can make it more concise), or the types of programming languages it uses. But yeah, as we're seeing it come in: pretty good outputs.
Shawn Wang [00:24:43]: "Scheduler code is actually wild."
Vibhu Sapra [00:24:46]: "Yo, this code is actually insane, bro."
Vibhu Sapra [00:24:53]: What's the process of training an SAE on this, and how do you label features? I know you guys put out a pretty cool blog post about autonomous interp, something about how agents for interp are different than coding agents. But while this is spewing out: how do we find feature 43205? Yeah.
Mark Bissell [00:25:15]: So in this case, our platform, which we've been building out for a long time now, supports all the classic out-of-the-box interp techniques that you might want to have, like SAE training and probing. I'd say the techniques for vanilla SAEs are pretty well established now: you take the model that you're interpreting, run a whole bunch of data through it, gather activations, and then it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties: top-K SAEs, batch top-K SAEs, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them, to actually understand that this is a Gen Z feature, that's where a lot of the magic happens. The most basic standard technique is: look at all of the input dataset examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough data set through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang.
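The pipeline Mark describes (gather activations, train a sparse autoencoder, label features by their top-activating examples) can be sketched in a few lines. This toy top-K SAE uses random, untrained weights, so it only illustrates the forward pass, the sparsity constraint, and the top-examples lookup; a real one would be fit by gradient descent on reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_feats, k = 64, 512, 8

# "Activations" gathered by running data through the model (random stand-ins here)
acts = rng.normal(size=(1024, d_model))

W_enc = rng.normal(size=(d_model, n_feats)) * 0.1     # untrained, for illustration
W_dec = rng.normal(size=(n_feats, d_model)) * 0.1

def topk_sae(x):
    """Top-K SAE forward pass: keep only the k largest pre-activations per example."""
    pre = x @ W_enc
    idx = np.argpartition(pre, -k, axis=-1)[:, -k:]   # indices of the k largest
    feats = np.zeros_like(pre)
    rows = np.arange(x.shape[0])[:, None]
    feats[rows, idx] = np.maximum(pre[rows, idx], 0.0)  # ReLU on the survivors
    return feats, feats @ W_dec                       # sparse codes, reconstruction

feats, recon = topk_sae(acts)
assert (feats != 0).sum(axis=1).max() <= k            # at most k active features each

# Basic auto-labeling step: pull a feature's top-activating examples for inspection
top_examples = np.argsort(feats[:, 123])[-5:]
```

In an autonomous-interp setup, those top examples would then be handed to a language model to summarize into a label like "Gen Z slang," instead of a human reading all 43,000-plus features.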
So, you know, you could have a human go through all 43,000 concepts and...
Vibhu Sapra [00:26:34]: And I've got to ask the basic question: can we get examples where it hallucinates, pass them through, see what feature activates for hallucinations? Can I just turn hallucination down?
Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. This is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem, and something that black-box methods really struggle with. For something like Gen Z slang you could always train a simple classifier; hallucination is harder. But we've seen that models internally have some awareness of uncertainty, or some sort of user-pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately, and then also working on mitigating the hallucinatory behavior in the model itself.
Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of "oh, I'll just turn temperature to zero and that turns off hallucination." And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.
Mark Bissell [00:27:51]: Although, part of what I like about that question is that there are SAE-based approaches that might help you get at that. But oftentimes the beauty of SAEs, and like we said, the curse, is that they're unsupervised.
So when you have a behavior that you deliberately would like to remove, and that's more of a supervised task, it is often better to use something like probes and specifically target the thing that you're interested in reducing, as opposed to hoping that when you fragment the latent space, one of the vectors that pops out is exactly the thing you care about.
Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not for sure certain that we will get something that just correlates to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.
Mark Bissell [00:28:36]: Of course, right. Yeah. So there are, you know, problems with feature splitting and feature absorption. And then there are the off-target effects, right? Ideally, you would want to be very precise: if you reduce the hallucination feature, maybe suddenly your model can't write creatively anymore. And maybe you don't want that; you want to still stop it from hallucinating facts and figures.
Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, just because your demo is done, any other things that you want to highlight, or any other interesting features you want to show?
Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. The main point here that I think is exciting is that there's not a whole lot of interp being applied to models quite at this scale. Anthropic certainly has some research, and other teams as well. But it's nice to see these techniques being put into practice. I think not that long ago, the idea of real-time steering of a trillion-parameter model would have sounded crazy.
Shawn Wang [00:29:33]: Yeah.
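At its core, the real-time steering in the demo above comes down to adding a scaled feature direction to a hidden state mid-forward-pass. A toy numpy sketch (invented residual block, invented "Gen Z" direction, not Goodfire's actual stack):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 32
W = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)  # stand-in layer weight

def layer(h, steer=None, alpha=0.0):
    """Toy residual block; optionally add a steering direction scaled by alpha."""
    h = h + np.tanh(h @ W)               # residual update
    if steer is not None:
        h = h + alpha * steer            # activation steering: nudge the residual stream
    return h

# Hypothetical unit-norm "Gen Z slang" feature direction (invented for this sketch)
genz = rng.normal(size=d_model)
genz /= np.linalg.norm(genz)

h = rng.normal(size=d_model)
h_plain = layer(h)
h_steered = layer(h, steer=genz, alpha=8.0)

# The steered state moved by exactly alpha along the chosen direction
print((h_steered - h_plain) @ genz)      # -> 8.0 (up to float error)
```

The "real time" part of the demo is just that `steer` and `alpha` can be swapped in mid-generation, since they only touch activations, never the weights.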
The fact that it's real time, like you started the thing and then you edited the steering vector...
Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case would be for the real-time editing; that's the fun part of the demo, right? You can kind of see how this could be served behind an API. You only have so many knobs, and you can just tweak it a bit more. And I don't know how it plays in. People haven't done that much with how this works with or without prompting, right? How does this work with fine-tuning? There's a whole hype of continual learning, right? So there's just so much to see. Is this another parameter? Is it a parameter we just kind of leave at a default and don't use? So I don't know. Maybe someone here wants to put out a guide on how to use this with prompting, when to do what.
Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I can't say enough amazing things about Ekdeep. He actually has a paper, along with some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive-neuroscience Bayesian framework, but basically you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.
Myra Deng [00:31:20]: And so they're formally equivalent, actually, in the limit.
Right.
Mark Bissell [00:31:24]: So one case study of this is jailbreaks. I don't know, have you seen the stuff where you can do many-shot jailbreaking? You flood the context with examples of the behavior. Anthropic put out that paper.
Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.
Mark Bissell [00:31:40]: What's in this in-context learning and activation steering equivalence paper is that you can predict the number of examples that you will need to put in there in order to jailbreak the model, by doing steering experiments and using this equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.
Shawn Wang [00:32:02]: I was going to say, I can back-rationalize that this makes sense, because what context is, is basically, it updates the KV cache, kind of, and then every next-token inference is still attending over all the context up to date. And you could, I guess, theoretically replace that with your steering. The only problem is steering typically is on one layer, maybe three layers, like you did. So it's not exactly equivalent.
Mark Bissell [00:32:33]: Right, right. You need to get precise about how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. The title is "Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering." Eric Bigelow and Dan Wurgaft, who are doing fellowships at Goodfire, are on there; Ekdeep is the final author.
Myra Deng [00:32:59]: I think, actually, to your question of what the production use case of steering is:
I think maybe if you just think one level beyond steering as it is today. Imagine if you could adapt your model to be, say, an expert legal reasoner, in almost real time, very quickly and efficiently, using human feedback or using your semantic understanding of what the model knows and where it knows that behavior. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about the next interface for model customization and adaptation is a really interesting problem for us. We have heard from a lot of people actually interested in fine-tuning and RL for open-weight models in production. People are using things like Tinker or open-source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's something we're

Shawn Wang [00:34:06]: looking into. Yeah. Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? What's the comparison there?

Mark Bissell [00:34:19]: Well, in that case, you are still applying updates to the parameters, right?

Shawn Wang [00:34:25]: Yeah. You're not touching the base model. You're touching an adapter. It's kind of, yeah.

Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space. Maybe it's: are you modifying the pipes, or are you modifying the water flowing through the pipes to get what you're after? That's maybe one way to put it.

Mark Bissell [00:34:44]: I like that analogy. That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on.
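The pipes-versus-water analogy can be made concrete with a toy sketch (my illustration, not Tinker's or Goodfire's code). A rank-one LoRA-style update changes the layer's weights themselves, while steering adds an offset to the activations flowing through; for one fixed input the two can be made to coincide exactly, though they generalize differently across inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
W = rng.normal(size=(d, d))   # frozen layer weights ("the pipes")
x = rng.normal(size=d)        # one input activation ("the water")
v = rng.normal(size=d)        # steering direction

# Steering: modify the water. Weights untouched, constant offset added.
steered = W @ x + 2.0 * v

# Rank-one (LoRA-style) update: modify the pipes with an outer product.
# Choosing B = 2*v and A = x / ||x||^2 makes the delta match for this x,
# since (W + B A^T) x = W x + B * (A . x) = W x + 2*v.
A = x / (x @ x)
B = 2.0 * v
lora_out = (W + np.outer(B, A)) @ x

print(np.allclose(steered, lora_out))  # True
```

The difference shows up on other inputs: the weight update's contribution scales with how much each input aligns with `A`, while the steering offset is the same for every input. That is one way to read the "not exactly equivalent" caveat above.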
And I hope that we look back at how we're currently training and post-training models and think what a primitive way of doing it that was. There's no intentionality

Shawn Wang [00:35:06]: really in... It's just data, right? The only thing in control is what data we feed in.

Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: he has a couple of young kids, and he talks about, what if I could only teach my kids how to be good people by giving them cookies, or giving them a slap on the wrist if they do something wrong, without telling them why it was wrong or what they should have done differently? Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And it's sample inefficient. What do they say? It's like sucking supervision through a straw. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that gets internalized, and steering is an inference-time way of getting at that idea. But ideally you're moving to a world where

Vibhu Sapra [00:36:04]: it is much more intentional design in perpetuity for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that trains foundation models, and you're on an interp team. How does it tie back? Do ideas come from the pre-training team? Do they go back? For those interested, you can watch that. There wasn't too much of a connect there, but it's still something they want to

Mark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. There are certainly post-hoc

Vibhu Sapra [00:36:39]: use cases where it doesn't need to touch that.
I think the other thing a lot of people forget is that this stuff isn't too computationally expensive, right? I would say, if you're interested in getting into research, mech interp is one of the most approachable fields. A lot of this, like train an SAE, train a probe, the budget for it is modest, and there's already a lot done. There's a lot of open-source work. You guys have done some too.

Shawn Wang [00:37:04]: There are notebooks from the Gemma team, from Neel Nanda, like: this is how you do it, just step through the notebook.

Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress. You can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars; it's not that high-scale. And same with applying it: doing it for post-training and all of that is fairly cheap compared to, okay, I want to get into model training but I don't have compute for pre-training. So it's a very nice field to get into. And there are a lot of open questions, right? Some of them have to do with, okay, I want a product, I want to solve this. There's also just a lot of open-ended stuff that people could work on, which is interesting. I don't know if you guys have any calls for open questions or open work, things you'd either collaborate on or just like to see solved. For people listening who want to get into mech interp, what are the things they should check out? And of course, join you guys as well; I'm sure you're hiring.

Myra Deng [00:38:09]: There's a paper, I think from, was it Lee Sharkey?
It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just a really comprehensive overview of what the experts in the field think are the most important problems to be solved. I also think, to your point, it's been really inspiring to see a lot of young people getting interested in interpretability. And actually not just young people: also scientists who have been experts in physics or biology for many years transitioning into interp, because the barrier to entry is, in some ways, low, and there's a lot of information out there and ways to get started. So it's really cool to see. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. It just goes to show how exciting the field is, how fast it's moving, and how quick it is to get started.

Mark Bissell [00:39:10]: And it's also just a very welcoming community. There's an open-source mech interp Slack channel, people are always posting questions, and folks in the space are always responsive if you ask things on various forums. But yeah, the Open Problems paper is a really good one.

Myra Deng [00:39:28]: For other people who want to get started, I think MATS is a great program. What's the acronym, ML Alignment Theory Scholars? It's like the...

Vibhu Sapra [00:39:40]: Normally summer-internship style.

Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. And actually a lot of our full-time staff have come through that program. It's great for anyone who is transitioning into interpretability. There are a couple of other fellows programs.
We do one, as does Anthropic. So those are great places to get started if anyone is interested.

Mark Bissell [00:40:03]: Also, interp has been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it scales up.

Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mech interp track at AI Engineer Europe, because I see the industry applications now emerging, and I'm pretty excited to help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference. Yeah, I'm so glad you added that. It's still a little bit of a bet; it's not that widespread. But I can definitely see this is the time to really get into it. We want to be early on things.

Mark Bissell [00:40:51]: For sure. And I think the field understands this, right? At ICML, the title of the mech interp workshop this year was Actionable Interpretability, and there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.

Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.

Vibhu Sapra [00:41:13]: And, I mean, being in Europe, you see the interp room at old-school conferences. I think they had a very tiny room till they got lucky and it got doubled. But there's definitely a lot of interest, a lot of niche research. You see a lot of research coming out of universities, from students. We covered a paper last week: two unknown authors, not many citations. But you can make a lot of meaningful work there.

Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet: interp for code. I think it's an abnormally important field.
We haven't mentioned this yet. The conspiracy theory a couple of years ago, when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and turn up the good code. And isn't that the dream? But why is it funny? If it were realistic, it would not be funny; it would be, no, actually, we should do this. It's funny because we feel there are some limitations to what steering can do. And a lot of the public image of steering is the Gen Z stuff: you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To be a legal reasoner seems like a huge stretch. And I don't know if it will get there this way.

Myra Deng [00:42:36]: I will say we are announcing something very soon that I will not speak too much about. But this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions you need to do to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...

Shawn Wang [00:43:07]: And is this an emergent property of scale as well?

Myra Deng [00:43:10]: I think so. Yeah. I mean, scale definitely helps. Scale allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data, and not learning things that you don't want exhibited in the data.
So we're not anti-scale, but we're also realizing that scale alone is not going to get us there. It's not going to get us to the type of AI development we want as these models get more powerful and get deployed in all these mission-critical contexts. The current life cycle of training, deploying, and evaluating models is, to us, deeply broken, and has opportunities to improve. So, more to come on that very soon.

Mark Bissell [00:44:02]: And I think that's basically a proof point that these concepts do exist. If you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. Steering is maybe the most coarse-grained peek at what that looks like, but it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.

Myra Deng [00:44:30]: There were bad-code features. I've got it pulled up.

Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys were talking.

Shawn Wang [00:44:35]: This is exactly it.

Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not typo detection: it's typos in code, not typical typos. You can see it clearly activates where there's something wrong in code. And they have malicious code, code error, a whole bunch of fine-grained sub-features.

Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that you have a few different rollouts with all these things turned off and on, and then that's synthetic data you can post-train on.
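A rough sketch of how a feature like the code-error one described above gets read out: a trained SAE's encoder maps a model activation into many sparse features, and a feature "fires" when its encoded value is positive. Everything here (dimensions, weights, the feature index) is made up for illustration; real SAEs are trained with a reconstruction plus sparsity objective, which is omitted.

```python
import numpy as np

def sae_features(h, W_enc, b_enc):
    """Encode one activation vector into sparse features: f = ReLU(W_enc h + b_enc)."""
    return np.maximum(0.0, W_enc @ h + b_enc)

rng = np.random.default_rng(2)
d_model, n_feat = 8, 32
W_enc = rng.normal(size=(n_feat, d_model))   # toy encoder weights
b_enc = -1.0 * np.ones(n_feat)               # negative bias encourages sparsity

h = rng.normal(size=d_model)                 # toy model activation
f = sae_features(h, W_enc, b_enc)

# Only a subset of features fire on any given activation.
print((f > 0).sum() < n_feat)  # True

# A hypothetical "code error" detector is then just one index to watch:
CODE_ERROR_FEATURE = 7   # assumption: an index found by inspecting examples
fires = f[CODE_ERROR_FEATURE] > 0
```

The post-training intuition above then amounts to: generate rollouts with a feature like this clamped up or down, and use the resulting text as synthetic training data.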
Yeah.

Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is; they do the real hard work.

Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was like...

Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys open-sourced yours. DeepMind has open-sourced a lot of SAEs on Gemma. Even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.

Shawn Wang [00:45:41]: Yeah. And special shout-out to Neuronpedia as well. Yes. An amazing piece of work for visualizing those things.

Myra Deng [00:45:49]: Yeah, exactly.

Shawn Wang [00:45:50]: I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys, and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up for AI for science; it's such a huge investment category, and I'm less qualified to do it, but we actually have bio PhDs to cover that, which is great. But I want to recap your work, maybe on the Evo 2 stuff, and then build forward.

Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: another interesting lens on interpretability in general is that a lot of the techniques we described are ways to solve the AI-human interface problem, where bidirectional communication is the goal. What we've been talking about, intentional design of models and steering, but also more advanced techniques, is having humans impart our desires and control into and over models.
And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well. What knowledge can the AIs teach us? That's the other direction. And some of our life-science work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks, or whether they're picking up on spurious correlations. For instance, with genomics models you would like to know whether they're focusing on the biologically relevant things you care about, or using some simpler correlate, like the ancestry of the person it's looking at. But then also, in the instances where they are superhuman, maybe they understand elements of the human genome that we don't have names for, discoveries they've made that we don't know about. Surfacing that is a big goal. And we're already seeing it: we are partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. In our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.

Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well. And there's a plethora of these models coming out, because there's so much potential and research.
And it's very interesting how it's basically the same as language models, just with a different underlying data set. It's the same exact techniques; there's no change, basically.

Mark Bissell [00:48:59]: Yeah. And even in other domains, right? In robotics, I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes actions. It's transformers all the way down.

Vibhu Sapra [00:49:15]: We have MedGemma now, right? Even this week there was MedGemma 1.5. And they're training it on this stuff: 3D scans, medical domain knowledge, all of that. So there's a push from both sides. But one of the things about mech interp is that you're a little more cautious in some domains, healthcare mainly being one: guardrails, understanding. We're more risk-averse to something going wrong there. So even just for basic understanding: if we're trusting these systems to make claims, we want to know why and what's going on.

Myra Deng [00:49:51]: Yeah, I think there's totally a deployment bottleneck to actually using foundation models for real patient usage. Say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment, actually is a really, really big unlock for scientific discovery. A lot of startups say they're going to accelerate scientific discovery, and I feel like we actually are doing that through our interp techniques.
And kind of almost by accident. We got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.

Shawn Wang [00:50:49]: How did they even hear of you? A podcast.

Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.

Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.

Myra Deng [00:50:55]: Everyone can call us.

Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.

Myra Deng [00:50:59]: Yeah, they reached out. They were like, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was a few of us. And we were like, oh my God, we've never used these models. Let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much.

Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills transfer everywhere, right? Yeah. It's just a general insight. Probably to finance too, I think, which would be fun. I don't know if you have anything to say there.

Mark Bissell [00:51:34]: Yeah, well, just across the sciences. We've also done work on materials science. It really runs the gamut.

Vibhu Sapra [00:51:40]: Yeah. Awesome. And for those who should reach out: you're obviously the experts here, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play the steering stuff, more on the research side. Are there ideal design partners, customers, things like that?

Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life-sciences, and then I'm curious to hear from you on the life-sciences side.
But we're looking for design partners across many domains: language, anyone who's customizing language models or trying to push the frontier of code or reasoning models, is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in, as we call it, pixel space. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.

Shawn Wang [00:52:43]: Just because you mentioned the keyword
🧭 REBEL Rundown

📌 Key Points
💨 HFNC met criteria for non-inferiority to BPAP for preventing intubation or death within 7 days in four of the five ARF subgroups.
🧪 Bayesian dynamic borrowing increased power across subgroups but created variable certainty, especially in smaller groups such as COPD.
🫁 The immunocompromised hypoxemia subgroup did not meet non-inferiority, leading to early trial stopping for futility.
Rescue BPAP use, subgroup-specific exclusion criteria, and non-standardized BPAP delivery are important contextual factors that influence how subgroup results should be interpreted.

Click here for Direct Download of the Podcast.

📝 Introduction

Bilevel Positive Airway Pressure (BPAP) has long been a foundational modality in the management of acute respiratory failure (ARF), particularly in COPD exacerbations and cardiogenic pulmonary edema, where it can rapidly reduce work of breathing and improve gas exchange. It remains a core tool in our respiratory support arsenal.

High-flow nasal cannula (HFNC), however, has expanded what we can offer patients by delivering many of the same physiologic benefits through a far more comfortable interface. With high flows, modest PEEP, and effective dead-space washout, HFNC can improve oxygenation and decrease work of breathing while preserving the ability to talk, cough, eat, and interact with staff and family. This combination of physiologic support and tolerability makes HFNC especially attractive in patients where comfort, anxiety, or cardiovascular stability are key considerations, and in settings where prolonged noninvasive support may be needed. Rather than competing with BPAP, HFNC broadens our options in ARF and allows us to better match the modality to the patient and their underlying disease process.

The RENOVATE trial set out to answer a high-impact question across five distinct etiologic groups: Is HFNC non-inferior to BPAP (NIV) for preventing intubation or death in acute respiratory failure?
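The "dynamic borrowing" in the key points refers to letting subgroups share information through a Bayesian hierarchical model. A toy normal-normal partial-pooling sketch illustrates the core behavior (hypothetical numbers, not the RENOVATE model or data): each subgroup's estimate is shrunk toward a pooled mean, and smaller subgroups, like the COPD arm here, borrow the most, which is exactly why their certainty behaves differently.

```python
import numpy as np

# Toy subgroup effect estimates and sizes (hypothetical, for illustration).
obs_effect = np.array([0.10, 0.05, 0.20, 0.08])  # per-subgroup estimates
n = np.array([400, 300, 40, 250])                # subgroup sizes; index 2 is small
sigma2 = 1.0                                     # within-subgroup variance (assumed)
tau2 = 0.01                                      # between-subgroup variance (assumed)

se2 = sigma2 / n                                 # sampling variance of each estimate
# Precision-weighted pooled mean across subgroups.
grand_mean = np.average(obs_effect, weights=1.0 / (se2 + tau2))

# Shrinkage weight: how much each subgroup trusts its own data vs. the pool.
w = tau2 / (tau2 + se2)
pooled = w * obs_effect + (1 - w) * grand_mean

# The smallest subgroup (n=40) is pulled hardest toward the grand mean.
shrink = np.abs(pooled - obs_effect)
print(np.argmax(shrink))  # 2, the n=40 subgroup
```

The actual trial used an adaptive Bayesian hierarchical model, which is considerably richer than this two-line shrinkage formula, but the qualitative effect (more borrowing, and more model-dependence, in small subgroups) is the same.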
🧾 Paper
Azoulay É, et al. High-Flow Nasal Oxygen vs Noninvasive Ventilation in Patients With Acute Respiratory Failure: The RENOVATE Randomized Clinical Trial. JAMA. 2025. PMID: 39657981

🔙 Previously Covered On REBEL:
HFNC: Part 1 – How It Works
HFNC: Part 2 – Adult and Pediatric Indications
FLORALI and AVOID Trial
FLORALI-2: NIV vs HFNC as Pre-Oxygenation Prior to Intubation
The Pre-AeRATE Trial – HFNC vs NC for RSI

What They Did

CLINICAL QUESTION
Is HFNC non-inferior to BPAP for rate of endotracheal intubation or death at 7 days in patients with acute respiratory failure due to a variety of causes?

STUDY DESIGN
Multicenter, randomized non-inferiority trial
33 Brazilian hospitals
Nov 2019 – Nov 2023
Adaptive Bayesian hierarchical modeling with dynamic borrowing
Open label, outcome adjudicators blinded
Patients were classified into 5 subgroups

SUBGROUPS
1. Non-immunocompromised hypoxemia
SpO₂ < 90% on room air or PaO₂ < 60 mm Hg on room air, plus
Increased respiratory effort (accessory muscle use, paradoxical breathing, thoracoabdominal asynchrony) or
Respiratory rate > 25 breaths/min
2. Immunocompromised hypoxemia
Defined as:
Use of immunosuppressive drugs for >3 months
OR high-dose steroids >0.5 mg/kg/day
OR solid organ transplant
OR solid tumors or hematologic malignancies (past 5 years)
OR HIV with AIDS / primary immunodeficiency
3. COPD exacerbation with acidosis
High clinical suspicion of COPD as primary diagnosis
RR >25 with accessory muscle use, paradoxical breathing, and/or thoracoabdominal asynchrony
ABG: pH < 7.35 with PaCO₂ > 45
4. Acute cardiogenic pulmonary edema (ACPE)
Sudden onset dyspnea and rales
± S3 heart sound
No evidence of aspiration, infection, or pulmonary fibrosis
CXR consistent with pulmonary edema
5.
Hypoxemic COVID-19 (added June 2023)
Added due to deviations between expected and observed outcome proportions
Any patient across the other 4 groups with PCR-confirmed SARS-CoV-2 infection

POPULATION
Inclusion Criteria:
≥18 yrs with ARF* in one of 5 pre-defined subgroups
*ARF (excluding COPD) was defined by the following: Hypoxemia with SpO₂
Dave Rubin of "The Rubin Report" talks to Michael Shermer author of "Truth: What It Is, How to Find It, and Why It Still Matters" about his new book on truth, science, and skepticism; the erosion of trust in institutions after COVID; the politicization of science; how to think critically in an age of misinformation, social media, and information overload; how you can sort out truth from lies using evidence-based Bayesian reasoning; the relationship between science, religion, and meaning; the rise of conspiracy theory pushers like Candace Owens and Tucker Carlson; his concerns over AI, deepfakes, and conspiracy theories; and his take on the most recent revelations with UFOs, UAPs, and historical revisionism, and much more.
Send us a text

Do Heterogeneous Treatment Effects Exist?

For the last 50 years, we've designed cars to be safe... for the 50th-percentile male. Well, that's actually not 100% correct. According to Stanford's report, we introduced "female" crash test dummies in the 1960s, but they were just scaled-down versions of male dummies and represented the 5th percentile of females in terms of body size and mass (aka the smallest 5% of women in the general population). These dummies also did not take into account female-typical injury tolerance, biomechanics, spinal alignment, and more. But does it matter for actual safety?

In the episode, we cover:
- Do heterogeneous treatment effects (different effects in different contexts) exist?
- If so, can we actually detect them?
- Is it more ethical to look for heterogeneous treatment effects or rather look at global averages?

Video version available on YouTube: https://youtu.be/V801RQTBpp4

Recorded on Nov 12, 2025 in Malaga, Spain.

------------------------------------------------------------------------------------------------------

About Richard
Professor Richard Hahn, PhD, is a professor of statistics at Arizona State University (ASU). He develops novel statistical methods for analyzing data arising from the social sciences, including psychology, economics, education, and business. His current focus revolves around causal inference using regression tree models, as well as foundational issues in Bayesian statistics.
Connect with Richard:
- Richard on LinkedIn: https://www.linkedin.com/in/richard-hahn-a1096050/

About Stephen
Stephen Senn, PhD, is a statistician and consultant who specializes in drug development clinical trials. He is a former Group Head at Ciba-Geigy and has taught at the University of Glasgow and University College London (UCL).
He is the author of "Statistical Issues in Drug Development," "Crossover Trials in Clinical Research," and "Dicing with Death."
Connect with Stephen:
- Stephen on LinkedIn:

Support the show

Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
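The heterogeneous-treatment-effects question in the episode above can be illustrated with a small simulation (entirely made-up numbers, just the crash-test-dummy problem in miniature): a treatment that helps one subgroup and harms another can still show a positive average effect, so the global average masks the harm unless you estimate subgroup-conditional effects.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical setup: the treatment helps group A (+2) and harms group B (-1).
group_a = rng.random(n) < 0.7          # 70% of the population is group A
treated = rng.random(n) < 0.5          # random assignment
true_effect = np.where(group_a, 2.0, -1.0)
y = true_effect * treated + rng.normal(size=n)   # outcome with unit noise

# Average treatment effect: looks clearly beneficial.
ate = y[treated].mean() - y[~treated].mean()

# Conditional (subgroup) effects tell the real story.
cate_a = y[treated & group_a].mean() - y[~treated & group_a].mean()
cate_b = y[treated & ~group_a].mean() - y[~treated & ~group_a].mean()

print(ate)     # close to 0.7*2 - 0.3*1 = 1.1, hiding the harm to group B
print(cate_a)  # close to +2.0
print(cate_b)  # close to -1.0
```

Detecting such effects reliably in real data is much harder than in this simulation, because subgroups are smaller and effects noisier, which is the core tension the episode discusses.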
• Support & get perks!
• Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com
• Intro to Bayes and Advanced Regression courses (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Chapters:
00:00 Scaling Bayesian Neural Networks
04:26 Origin Stories of the Researchers
09:46 Research Themes in Bayesian Neural Networks
12:05 Making Bayesian Neural Networks Fast
16:19 Microcanonical Langevin Sampler Explained
22:57 Bottlenecks in Scaling Bayesian Neural Networks
29:09 Practical Tools for Bayesian Neural Networks
36:48 Trade-offs in Computational Efficiency and Posterior Fidelity
40:13 Exploring High Dimensional Gaussians
43:03 Practical Applications of Bayesian Deep Ensembles
45:20 Comparing Bayesian Neural Networks with Standard Approaches
50:03 Identifying Real-World Applications for Bayesian Methods
57:44 Future of Bayesian Deep Learning at Scale
01:05:56 The Evolution of Bayesian Inference Packages
01:10:39 Vision for the Future of Bayesian Statistics

Thank you to my Patrons for making this episode possible!

Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026!

Links from the show:
David Rügamer: Website, Google Scholar, GitHub
Emanuel Sommer: Website, GitHub, Google Scholar
Jakob Robnik: Google Scholar, GitHub, Microcanonical Langevin paper, LinkedIn
What makes something truly *intelligent?* Is a rock an agent? Could a perfect simulation of your brain actually *be* you? In this fascinating conversation, Dr. Jeff Beck takes us on a journey through the philosophical and technical foundations of agency, intelligence, and the future of AI.

Jeff doesn't hold back on the big questions. He argues that from a purely mathematical perspective, there's no structural difference between an agent and a rock – both execute policies that map inputs to outputs. The real distinction lies in *sophistication* – how complex are the internal computations? Does the system engage in planning and counterfactual reasoning, or is it just a lookup table that happens to give the right answers?

*Key topics explored in this conversation:*

*The Black Box Problem of Agency* – How can we tell if something is truly planning versus just executing a pre-computed response? Jeff explains why this question is nearly impossible to answer from the outside, and why the best we can do is ask which model gives us the simplest explanation.

*Energy-Based Models Explained* – A masterclass on how EBMs differ from standard neural networks. The key insight: traditional networks only optimize weights, while energy-based models optimize *both* weights and internal states – a subtle but profound distinction that connects to Bayesian inference.

*Why Your Brain Might Have Evolved from Your Nose* – One of the most surprising moments in the conversation. Jeff proposes that the complex, non-smooth nature of olfactory space may have driven the evolution of our associative cortex and planning abilities.

*The JEPA Revolution* – A deep dive into Yann LeCun's Joint Embedding Predictive Architecture and why learning in latent space (rather than predicting every pixel) might be the key to more robust AI representations.

*AI Safety Without Skynet Fears* – Jeff takes a refreshingly grounded stance on AI risk.
He's less worried about rogue superintelligences and more concerned about humans becoming "reward function selectors" – couch potatoes who just approve or reject AI outputs. His proposed solution? Use inverse reinforcement learning to derive AI goals from observed human behavior, then make *small* perturbations rather than naive commands like "end world hunger."

Whether you're interested in the philosophy of mind, the technical details of modern machine learning, or just want to understand what makes intelligence *tick,* this conversation delivers insights you won't find anywhere else.

---

TIMESTAMPS:
00:00:00 Geometric Deep Learning & Physical Symmetries
00:00:56 Defining Agency: From Rocks to Planning
00:05:25 The Black Box Problem & Counterfactuals
00:08:45 Simulated Agency vs. Physical Reality
00:12:55 Energy-Based Models & Test-Time Training
00:17:30 Bayesian Inference & Free Energy
00:20:07 JEPA, Latent Space, & Non-Contrastive Learning
00:27:07 Evolution of Intelligence & Modular Brains
00:34:00 Scientific Discovery & Automated Experimentation
00:38:04 AI Safety, Enfeeblement & The Future of Work

---

REFERENCES:
Concept:
[00:00:58] Free Energy Principle (FEP)
https://en.wikipedia.org/wiki/Free_energy_principle
[00:06:00] Monte Carlo Tree Search
https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
Book:
[00:09:00] The Intentional Stance
https://mitpress.mit.edu/9780262540537/the-intentional-stance/
Paper:
[00:13:00] A Tutorial on Energy-Based Learning (LeCun 2006)
http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf
[00:15:00] Auto-Encoding Variational Bayes (VAE)
https://arxiv.org/abs/1312.6114
[00:20:15] JEPA (Joint Embedding Prediction Architecture)
https://openreview.net/forum?id=BZ5a1r-kVsf
[00:22:30] The Wake-Sleep Algorithm
https://www.cs.toronto.edu/~hinton/absps/ws.pdf

---

RESCRIPT:
https://app.rescript.info/public/share/DJlSbJ_Qx080q315tWaqMWn3PixCQsOcM4Kf1IW9_Eo
PDF:
https://app.rescript.info/api/public/sessions/0efec296b9b6e905/pdf
Another Podcast! This week we talk about the Bayesian and the lawsuit from TISG, and why it's most likely a big mistake. We also talk about AIS, and why many people who work onboard yachts and ships, and are otherwise very knowledgeable, still get it wrong.
Conversion Monthly - The panel kicks off 2026 with predictions on AI-driven creative workflows, agentic shopping behaviours, and the tools reshaping Amazon seller operations.

Host: Danny McMillan
Panel: Sim Mahon, Dorian Gorski, Matt Kostan

Episode Summary
The newly rebranded Conversion Monthly show returns with its expert panel to discuss 2026 predictions for Amazon creative optimisation. The conversation covers how AI workflows have evolved since early 2025, with Dorian noting how N8N has become significantly more accessible through built-in AI assistants. Sim shares that his team can now create final, upload-ready main images in a single AI generation. The panel discusses agentic shopping and how AI-driven product discovery may fundamentally change conversion optimisation. Matt highlights the trend toward hyper-specific product positioning, where sellers create separate ASINs for the same product targeting different demographics. Danny introduces Claude's new Co-Work feature as a significant leap that removes technical barriers for sellers wanting to build automations. The panel agrees that "human in the loop" will be the defining phrase of 2026. Sim reveals his investment in 51 Folds, a prediction platform using Bayesian networks. 
Key Takeaways
• One-shot main images are now reality - AI image generation has reached the point where final, upload-ready Amazon images can be created in a single prompt
• Hyper-specific product positioning is trending - creating separate ASINs for the same product targeting different demographics aligns with AI recommendations
• Technical barriers to automation are evaporating - tools like Claude Co-Work and improved N8N AI assistants are making workflow automation accessible
• "Human in the loop" defines 2026 - the winning strategy combines automated data collection with human strategic oversight
• The big three AI providers have stabilised - Anthropic, Google, and OpenAI now dominate, reducing shiny object syndrome
• Video generation remains the next frontier - while image generation is solved, video still requires scene-by-scene refinement

Chapter Markers
00:00 - Introduction and 2026 Outlook
00:58 - Dorian on the Pace of Change Since 2025
04:07 - N8N Accessibility and Self-Build Workflows
05:33 - One-Shot Image Generation Capabilities
07:23 - Video Generation Limitations
10:26 - Business Systems, ClickUp and Future-Proofing
14:37 - Hyper-Specific Product Positioning
20:06 - Keplo 2026 Direction
22:26 - Competitive Advantage and AI Accessibility
25:01 - The Big Three AI Providers
28:46 - 51 Folds Investment and Bayesian Prediction
33:14 - Panel 2026 Priorities
38:12 - Wrap-Up

Resources
• Seller Sessions Website
• Seller Sessions YouTube
• Sim Mahon on LinkedIn
• Dorian Gorski on LinkedIn
• Matt Kostan on LinkedIn
• Support & get perks!
• Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com
• Intro to Bayes and Advanced Regression courses (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

Chapters:
11:37 The Hard Tech Era
21:08 The Shift in Tech Work Culture
28:49 AI's Impact on Job Security and Work Dynamics
34:33 Adapting to AI: Skills for the Future
45:56 Understanding AI Models and Their Limitations
47:25 The Importance of Diversity in AI Development
54:34 Positioning Technical Talent for Job Security
57:58 Building Resilience in Uncertain Times
01:06:33 Recognizing Diverse Ambitions in Career Progression
01:12:51 The Role of Managers in Employee Retention
01:26:55 Solving Complex Problems with AI and Innovation

Thank you to my Patrons for making this episode possible!

Links from the show:
• Alana's latest book (Use code BAYESIAN for 10% off + a free interview preparation download PDF)
• Alana's Substack
• Alana on Linkedin
• Alana on Instagram
• The Obstacle Is the Way – The Timeless Art of Turning Trials into Triumph
• Courage Is Calling – Fortune Favours the Brave
Today's clip is from episode 148 of the podcast, with Scott Berry. In this conversation, Alex and Scott discuss the shift from frequentist to Bayesian approaches in clinical trials. They highlight the limitations of traditional trial designs and the advantages of adaptive and platform trials, particularly in the context of COVID-19 treatment. The discussion provides insights into the complexities of trial design and the innovative methodologies that are shaping the future of medical research. Get the full discussion here!
• Join this channel to get access to perks: https://www.patreon.com/c/learnbayesstats
• Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302
• Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
"Today's the day." ~ Mel Fisher

Why does a reinsurance company's annual letter get me this fired up? Ross Stevens weaves together treasure hunting, Bayesian statistics, and the raw reality of Bitcoin as a human rights tool - culminating in a voice message from Nobel Prize winner María Corina Machado that moved him to tears. In my rant, I unpack why optimism isn't woo-woo, it's basic logic - and why waiting for certainty before believing in your own success is a guaranteed path to failure.

Check out the original article: Stone Ridge 2025 Investor Letter (Link: https://www.nydig.com/research/stone-ridge-2025-investor-letter)

References from the episode
• The Adaptability Quotient by Alec Litowitz - The forthcoming field guide for success mentioned in the letter.
• Wired for Story and Story Genius by Lisa Cron - My favorite books on what storytelling actually is. (Hint: it's not the plot!) (Link: http://wiredforstory.com/books)
• Fascinate: How to Make your Brand Impossible to Resist by Sally Hogshead - Fantastic on branding and narrative. (Link: https://a.co/d/5H1FJUy)
• Save the Cat by Blake Snyder - Useful for story pacing, but to be paired with Kill the Dog for balance. (Link: https://a.co/d/a1rp6Lc)
• Longitude by Dava Sobel (Link: https://en.wikipedia.org/wiki/Longitude_(book))
• The Human Rights Foundation (Link: https://hrf.org/)
• HRF Weekly Financial Freedom Reports (Link: https://hrf.org/newsletters/)

Host Links
• Guy on Nostr (Link: http://tinyurl.com/2xc96ney)
• Guy on X (Link: https://twitter.com/theguyswann)
• Guy on Instagram (Link: https://www.instagram.com/theguyswann)
• Guy on TikTok (Link: https://www.tiktok.com/@theguyswann)
• Guy on YouTube (Link: https://www.youtube.com/@theguyswann)
• Bitcoin Audible on X (Link: https://twitter.com/BitcoinAudible)
• The Guy Swann Network Broadcast Room on Keet (Link: https://tinyurl.com/3na6v839)

Check out our awesome sponsors!
Ledn: Need fiat but don't want to sell your Bitcoin? 
Ledn offers secure, Bitcoin-backed loans with no credit checks, flexible repayment, and fast turnaround—often within 24 hours. With $10B+ in loans across 100+ countries and transparent Proof of Reserves, Ledn is a trusted option for unlocking liquidity witho...
Eric Bradlow, Shane Jensen, and Adi Wyner examine how AI-driven metrics, model calibration, and Bayesian approaches inform quarterback evaluation, team upsets, and the evolving limits of sports analytics. Hosted on Acast. See acast.com/privacy for more information.
Dr. Jeff Beck, mathematician turned computational neuroscientist, joins us for a fascinating deep dive into why the future of AI might look less like ChatGPT and more like your own brain.

**SPONSOR MESSAGES START**
Prolific - Quality data. From real people. For faster breakthroughs.
https://www.prolific.com/?utm_source=mlst
**END**

*What if the key to building truly intelligent machines isn't bigger models, but smarter ones?*

In this conversation, Jeff makes a compelling case that we've been building AI backwards. While the tech industry races to scale up transformers and language models, Jeff argues we're missing something fundamental: the brain doesn't work like a giant prediction engine. It works like a scientist, constantly testing hypotheses about a world made of *objects* that interact through *forces* — not pixels and tokens.

*The Bayesian Brain* — Jeff explains how your brain is essentially running the scientific method on autopilot. When you combine what you see with what you hear, you're doing optimal Bayesian inference without even knowing it. This isn't just philosophy — it's backed by decades of behavioral experiments showing humans are surprisingly efficient at handling uncertainty.

*AutoGrad Changed Everything* — Forget transformers for a moment. Jeff argues the real hero of the AI boom was automatic differentiation, which turned AI from a math problem into an engineering problem. But in the process, we lost sight of what actually makes intelligence work.

*The Cat in the Warehouse Problem* — Here's where it gets practical. Imagine a warehouse robot that's never seen a cat. Current AI would either crash or make something up. Jeff's approach? Build models that *know what they don't know*, can phone a friend to download new object models on the fly, and keep learning continuously. It's like giving robots the ability to say "wait, what IS that?" 
instead of confidently being wrong.

*Why Language is a Terrible Model for Thought* — In a provocative twist, Jeff argues that grounding AI in language (like we do with LLMs) is fundamentally misguided. Self-report is the least reliable data in psychology — people routinely explain their own behavior incorrectly. We should be grounding AI in physics, not words.

*The Future is Lots of Little Models* — Instead of one massive neural network, Jeff envisions AI systems built like video game engines: thousands of small, modular object models that can be combined, swapped, and updated independently. It's more efficient, more flexible, and much closer to how we actually think.

Rescript: https://app.rescript.info/public/share/D-b494t8DIV-KRGYONJghvg-aelMmxSDjKthjGdYqsE

---

TIMESTAMPS:
00:00:00 Introduction & The Bayesian Brain
00:01:25 Bayesian Inference & Information Processing
00:05:17 The Brain Metaphor: From Levers to Computers
00:10:13 Micro vs. Macro Causation & Instrumentalism
00:16:59 The Active Inference Community & AutoGrad
00:22:54 Object-Centered Models & The Grounding Problem
00:35:50 Scaling Bayesian Inference & Architecture Design
00:48:05 The Cat in the Warehouse: Solving Generalization
00:58:17 Alignment via Belief Exchange
01:05:24 Deception, Emergence & Cellular Automata

---

REFERENCES:
Paper:
[00:00:24] Zoubin Ghahramani (Google DeepMind)
https://pmc.ncbi.nlm.nih.gov/articles/PMC3538441/pdf/rsta201
[00:19:20] Mamba: Linear-Time Sequence Modeling
https://arxiv.org/abs/2312.00752
[00:27:36] xLSTM: Extended Long Short-Term Memory
https://arxiv.org/abs/2405.04517
[00:41:12] 3D Gaussian Splatting
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
[01:07:09] Lenia: Biology of Artificial Life
https://arxiv.org/abs/1812.05433
[01:08:20] Growing Neural Cellular Automata
https://distill.pub/2020/growing-ca/
[01:14:05] DreamCoder
https://arxiv.org/abs/2006.08381
[01:14:58] The Genomic Bottleneck
https://www.nature.com/articles/s41467-019-11786-6
Person:
[00:16:42] Karl Friston (UCL)
https://www.youtube.com/watch?v=PNYWi996Beg
• Support & get perks!
• Proudly sponsored by PyMC Labs. Get in touch and tell them you come from LBS!
• Intro to Bayes and Advanced Regression courses (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

Chapters:
13:16 Understanding Adaptive and Platform Trials
25:25 Real-World Applications and Innovations in Trials
34:11 Challenges in Implementing Bayesian Adaptive Trials
42:09 The Birth of a Simulation Tool
44:10 The Importance of Simulated Data
48:36 Lessons from High-Stakes Trials
52:53 Navigating Adaptive Trial Designs
56:55 Communicating Complexity to Stakeholders
01:02:29 The Future of Clinical Trials
01:10:24 Skills for the Next Generation of Statisticians

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Giuliano Cruz, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli, Guillaume Berthon, Avenicio Baca, Spencer Boucher, Krzysztof Lechowski, Danimal, Jácint Juhász, Sander and Philippe.

Links from the show:
Berry Consultants
Scott's podcast
LBS #45 Biostats & Clinical Trial Design, with Frank Harrell
*What can we learn about causal inference from the “war” between Bayesians and frequentists?*

In the episode, we cover:
- What can we learn from the “war” between Bayesians and frequentists?
- Why do Bayesian Additive Regression Trees (BART) “just work”?
- Do heterogeneous treatment effects exist?
- Is RCT generalization a heterogeneity problem?

In the episode, we accidentally coined a new term: “feature-level selection bias.”

------------------------------------------------------------------------------------------------------

Video version available on YouTube: https://youtu.be/-hRS8eU3Tow
Recorded in Arizona, US.

------------------------------------------------------------------------------------------------------

*About The Guest*
Professor Richard Hahn, PhD, is a professor of statistics at Arizona State University (ASU). He develops novel statistical methods for analyzing data arising from the social sciences, including psychology, economics, education, and business. 
His current focus revolves around causal inference using regression tree models, as well as foundational issues in Bayesian statistics.

Connect with Richard:
- Richard on LinkedIn: https://www.linkedin.com/in/richard-hahn-a1096050/
- Richard's web page: https://methodologymatters.substack.com/about

*About The Host*
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).

Connect with Alex:
- Alex on the Internet: https://bit.ly/aleksander-molak

*Links*
Repo
- https://stochtree.ai
Papers
- Hahn et al (2020) - "Bayesian Regression Tree Models for Causal Inference" (https://projecteuclid.org/journals/bayesian-analysis/volume-15/issue-3/Bayesian-Regression-Tree-Models-for-Causal-Inference--Regularization-Confounding/10.1214/19-BA1195.full)
- Yeager, ..., Dweck et al (2019) - "A national experiment reveals where a growth mindset improves achievement" (https://www.nature.com/articles/s41586-019-1466-y)
- Herren, Hahn, et al (20

Support the show

Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Aaron Schatz, Chief Analytics Officer at FTN Fantasy and founder of Football Outsiders, joins Eric Bradlow to explore how DVOA and play-by-play analytics challenge conventional narratives about the 2025 NFL season, conference strength, playoff probabilities, and the growing influence of data in awards voting. Plus, Eric walks through real-world examples of using generative AI to forecast NBA win totals, college football playoff probabilities, and NFL Super Bowl odds, highlighting how modern models apply Bayesian reasoning, betting markets, and simulation-based analytics. Hosted on Acast. See acast.com/privacy for more information.
Today's clip is from episode 147 of the podcast, with Martin Ingram.

Alex and Martin discuss the intricacies of variational inference, particularly focusing on the ADVI method and its challenges. They explore the evolution of approximate inference methods, the significance of mean field variational inference, and the innovative linear response technique for covariance estimation. The discussion also delves into the trade-offs between stochastic and deterministic optimization techniques, providing insights into their implications for Bayesian statistics.

Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Sicilian Sea, August 2024. A superyacht named Bayesian is sailing through the night when a storm cell breaks on the horizon. On board is Mike Lynch, the "British Bill Gates," who has just emerged from a long legal battle with the United States. A few hours later, silence. A lightning-fast sinking that raises more questions than it answers.

To understand how things got there, we retrace the trajectory of a genius shaped by Cambridge, and by an obsession: Bayes' theorem. From Cambridge Neurodynamics to Autonomy, from the negotiations with HP to the early days of Darktrace, Lynch built machines capable of making chaos speak: emails, images, fingerprints, weak signals. Between the state, the intelligence services, and the business world, he navigated a grey zone where algorithms sometimes carry more weight than words. That's our Distorsion angle: strange stories from the digital age, where data ends up looking like destiny.

At the heart of the episode lies a troubling 48-hour sequence, a "statistically improbable" storm, and an Italian investigation torn between extreme weather, human negligence... and other leads that some would perhaps rather not stir up. Accident, covert operation, or just bad luck pushed to the extreme? We piece the story back together without going conspiratorial, we listen to the witnesses, we follow the numbers... and we leave room for that shiver no model can really predict. Enjoy the episode.

Discover the immersive exhibition Sherlock Holmes: Menez l'enquête presented by Pointe-à-Callière in Montreal! Buy your tickets now right here: https://bit.ly/expo-distorsion
nordvpn.com/distorsion, exclusive discount on your subscription + 4 free months!
ÉrosEt Compagnie: 15% off with code Distorsion
Patreon
Site Web
Boutique
Hosted by Acast. See acast.com/privacy for more information.
You've read about how this groundbreaking trial on ketamine vs etomidate for RSI "Changes Everything!" on the socials. Or perhaps "it's horribly biased and unnecessary... we already knew all this!" Why? Well... social media. Listen in as Dr Jarvis discusses not just this trial, but what the evidence landscape was before it was released. Why was it done, how was it done, what does it show, and how can we integrate it into our practice?

Citations:
1. Casey JD, Seitz KP, Driver BE, et al. Ketamine or Etomidate for Tracheal Intubation of Critically Ill Adults. N Engl J Med. Published online December 9, 2025.
2. Jabre P, Combes X, Lapostolle F, et al. Etomidate versus ketamine for rapid sequence intubation in acutely ill patients: a multicentre randomised controlled trial. Lancet. 2009;374(9686):293-300.
3. Matchett G, Gasanova I, Riccio CA, et al. Etomidate versus ketamine for emergency endotracheal intubation: a randomized clinical trial. Intensive Care Med. 2022;48(1):78-91.
4. Koroki T, Kotani Y, Yaguchi T, et al. Ketamine versus etomidate as an induction agent for tracheal intubation in critically ill adults: a Bayesian meta-analysis. Crit Care. 2024;28(1):48.
5. Yeh RW, Valsdottir LR, Yeh MW, et al. Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial. BMJ. 2018;363:k5094. doi:10.1136/bmj.k5094
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- DADVI is a new approach to variational inference that aims to improve speed and accuracy.
- DADVI allows for faster Bayesian inference without sacrificing model flexibility.
- Linear response can help recover covariance estimates from mean estimates.
- DADVI performs well in mixed models and hierarchical structures.
- Normalizing flows present an interesting avenue for enhancing variational inference.
- DADVI can handle large datasets effectively, improving predictive performance.
- Future enhancements for DADVI may include GPU support and linear response integration.

Chapters:
13:17 Understanding DADVI: A New Approach
21:54 Mean Field Variational Inference Explained
26:38 Linear Response and Covariance Estimation
31:21 Deterministic vs Stochastic Optimization in DADVI
35:00 Understanding DADVI and Its Optimization Landscape
37:59 Theoretical Insights and Practical Applications of DADVI
42:12 Comparative Performance of DADVI in Real Applications
45:03 Challenges and Effectiveness of DADVI in Various Models
48:51 Exploring Future Directions for Variational Inference
53:04 Final Thoughts and Advice for Practitioners

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël...
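As I understand the episode, the core DADVI idea is to freeze one set of base draws and optimize a deterministic objective, instead of injecting fresh Monte Carlo noise at every step as stochastic ADVI does. Here is a minimal, purely illustrative sketch of that fixed-draws idea on a toy 1D Gaussian target; nothing below is the actual DADVI implementation.

```python
import math
import random

# Toy "fixed draws" variational inference: fit q = N(mu, s^2) to a known
# Gaussian target N(m0, s0^2) by gradient ascent on a deterministic ELBO
# estimate built from one frozen set of base draws. Illustrative only.
random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(2000)]  # fixed base draws, never resampled

m0, s0 = 2.0, 0.5        # target posterior N(m0, s0^2)
mu, log_s = 0.0, 0.0     # variational parameters (log_s keeps s positive)

lr = 0.05
for _ in range(1000):
    s = math.exp(log_s)
    # Residuals of reparameterized draws x = mu + s*z under the target log density
    resid = [(mu + s * zi - m0) / s0**2 for zi in z]
    grad_mu = -sum(resid) / len(z)                                    # dELBO/dmu
    grad_log_s = -s * sum(r * zi for r, zi in zip(resid, z)) / len(z) + 1.0  # entropy term
    mu += lr * grad_mu
    log_s += lr * grad_log_s

# Because the objective is deterministic, the optimizer converges to a fixed
# point near the true posterior mean and scale: mu ~ 2.0, s ~ 0.5.
print(round(mu, 2), round(math.exp(log_s), 2))
```

Because the draws are frozen, standard deterministic optimizers converge cleanly; that is the speed/robustness advantage the episode attributes to the approach.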
What's the optimal amount and type of exercise to improve symptoms of Major Depressive Disorder?

A new Bayesian network meta-analysis may have the most straightforward answer yet. In this episode, I break down a comprehensive review comparing four primary exercise modalities: aerobic, resistance, mind–body, and mixed training, and their impact on clinically diagnosed MDD.

We explore:
• The U-shaped dose–response curve
• The minimum clinically effective dose (~320 MET-min/week)
• The optimal dose (~860 MET-min/week)
• Why mind–body training works at a lower volume
• How METs standardise intensity across exercise types
• How to build an evidence-aligned movement plan when motivation and energy are low

This is a practical, grounded, science-backed guide to using exercise as one part of a broader approach to healing depression.
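To make the MET-min/week unit concrete, here's a small sketch of tallying a weekly plan against the episode's thresholds. The MET values are typical compendium-style approximations I've assumed for illustration, not figures from the study.

```python
# Approximate MET values (metabolic equivalents) for a few activities.
# These are rough illustrative numbers, not values from the meta-analysis.
MET_VALUES = {"brisk_walk": 4.0, "yoga": 3.0, "resistance": 3.5}

# Thresholds mentioned in the episode (MET-min/week)
MIN_EFFECTIVE, OPTIMAL = 320, 860

def weekly_met_minutes(sessions):
    """sessions: list of (activity, minutes) tuples for one week."""
    return sum(MET_VALUES[activity] * minutes for activity, minutes in sessions)

# A hypothetical low-energy week: two walks, one yoga class, one gym session
week = [("brisk_walk", 30), ("yoga", 45), ("brisk_walk", 30), ("resistance", 40)]
total = weekly_met_minutes(week)

status = ("below minimum" if total < MIN_EFFECTIVE
          else "at or above optimal" if total >= OPTIMAL
          else "within the clinically effective range")
print(total, status)  # 515.0 MET-min/week, within the clinically effective range
```

The point of the arithmetic: even a modest plan clears the ~320 MET-min/week floor, and the U-shaped curve means chasing volume far past ~860 isn't expected to help further.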
In this episode, we explore the foundations and evolution of decision theory. Our guest, Itzhak Gilboa, begins with a brief historical overview of how the field has developed over time. We naturally discuss maximising expected utility, Bayesian decision theory, and Savage's representation theorem. Itzhak then delves into critiques of the Bayesian approach, especially concerning its interpretation of what constitutes a "rational decision maker." He presents a range of alternative decision frameworks, including approaches that do not require individuals to specify a full subjective probability distribution. Itzhak Gilboa is Professor of Economics and Decision Sciences at HEC Paris.
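As a tiny numerical companion to the expected-utility discussion: under an exponential (constant absolute risk aversion) utility, a risk-averse agent can prefer a sure payoff to a lottery with a higher expected value. The payoffs and risk-aversion level below are invented for the sketch.

```python
import math

# Exponential utility with constant absolute risk aversion a (illustrative).
def u(x, a=0.01):
    return 1.0 - math.exp(-a * x)

# Lottery: 50% chance of 0, 50% chance of 200 (expected value 100)
lottery_eu = 0.5 * u(0) + 0.5 * u(200)
sure_eu = u(90)   # sure payoff of 90, i.e. less than the lottery's expected value

# The risk-averse agent maximizing expected utility picks the certain 90
print(sure_eu > lottery_eu)  # True
```

This is exactly the kind of behavior the Bayesian expected-utility framework captures, and that the critiques Itzhak discusses take as their starting point.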
Today's clip is from episode 146 of the podcast, with Ethan Smith.

Alex and Ethan discuss the application of Bayesian inference in high energy density physics, particularly in analyzing complex data sets. They highlight the advantages of Bayesian techniques, such as incorporating prior knowledge and managing uncertainties. They also share insights from an ongoing experimental project focused on measuring the equation of state of plasma at extreme pressures. Finally, Alex and Ethan advocate for best practices in managing large codebases and ensuring model reliability.

Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- Ethan's research involves using lasers to compress matter to extreme conditions to study astrophysical phenomena.
- Bayesian inference is a key tool in analyzing complex data from high energy density experiments.
- The future of high energy density physics lies in developing new diagnostic technologies and increasing experimental scale.
- High energy density physics can provide insights into planetary science and astrophysics.
- Emerging technologies in diagnostics are set to revolutionize the field.
- Ethan's dream project involves exploring pycnonuclear fusion.

Chapters:
14:31 Understanding High Energy Density Physics and Plasma Spectroscopy
21:24 Challenges in Data Analysis and Experimentation
36:11 The Role of Bayesian Inference in High Energy Density Physics
47:17 Transitioning to Advanced Sampling Techniques
51:35 Best Practices in Model Development
55:30 Evaluating Model Performance
01:02:10 The Role of High Energy Density Physics
01:11:15 Innovations in Diagnostic Technologies
01:22:51 Future Directions in Experimental Physics
01:26:08 Advice for Aspiring Scientists

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. 
Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady,
What do you expect from running a fire test? I would hope that it improves my state of knowledge. But do they do this? We often pursue them blindly, but it seems there is a way to do this in an informed way. In this episode we explore a rigorous, practical way to select and design experiments by asking a sharper question: which test delivers the most decision-changing information for the least cost, time, and impact. With Dr. Andrea Franchini of Ghent University, we unpack a Bayesian framework that simulates possible outcomes before you touch a sample, updates your state of knowledge, and quantifies the utility of that update as uncertainty reduction, economic value, or environmental benefit.First, we reframe testing around information gain. Starting from a prior distribution for the parameter you care about, we model candidate experiments and compute how each would shift the posterior. The gap between prior and posterior is the signal; diminishing returns tell you when to stop. In the cone calorimeter case on PMMA ignition time, early trials yield large gains, then the curve flattens, revealing a rational stopping point and a transparent way to plan sample counts and budgets. The same structure scales from simple statistical models to high-fidelity or surrogate models when physics and geometry matter.Then we tackle a post-fire decision with real financial stakes: repair a reinforced concrete slab, or accept residual risk. We connect Eurocode-based thermal analysis to two test options—rebound hammer temperature proxies and discoloration depth—and compute their value of information. By translating updated probabilities of exceeding 600°C into expected costs of repair versus undetected failure, we show how to choose the test that pays back the most. 
In the studied scenario, the rebound hammer provides higher value, even after accounting for testing costs, but the framework adapts to different buildings, cost ratios, and risk appetites.

Beyond pass-fail, this approach helps optimize sensor layouts, justify added instrumentation, and balance multiple objectives—uncertainty, money, and environmental impact—without slipping into guesswork. If you're ready to move from ritual testing to evidence that changes outcomes, this conversation maps the path.

Papers to read after this:
Which test is the best? Choosing the fire test that maximizes the information gain
Quantifying the expected utility of fire tests and experiments before execution

----
The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.
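The prior-to-posterior logic and its diminishing returns can be sketched with a toy conjugate model. This is an illustrative assumption, not the episode's or the papers' actual model; all numbers are made up:

```python
import math

# Toy sketch: information gain from n repeated tests under a conjugate
# normal model with known measurement noise. Values are illustrative only.
prior_var = 25.0      # assumed prior variance on mean ignition time (s^2)
noise_var = 16.0      # assumed measurement noise variance per test

def info_gain(n):
    # Posterior variance after n observations (standard conjugate update)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    # Entropy reduction of a Gaussian, in nats: the prior-to-posterior "gap"
    return 0.5 * math.log(prior_var / post_var)

gains = [info_gain(n) for n in range(1, 6)]
marginal = [gains[0]] + [b - a for a, b in zip(gains, gains[1:])]
# Each extra test buys less information than the last -> a rational stopping point
print([round(m, 3) for m in marginal])
```

The shrinking marginal gain is the "curve flattens" behavior described above: once an additional sample no longer changes your decision, it is rational to stop paying for tests.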
Before you listen to this episode, can you quantify how useful you expect it to be? That's a prior! And "priors" is a word that gets used a lot in this discussion with Michael Kaminsky as we try to demystify the world of Bayesian statistics. Luckily, you can just listen to the episode once and then update your expectation—no need to simulate listening to the show a few thousand times or crunch any numbers whatsoever. The most important takeaway is that you'll know you've achieved Bayesian clarity when you come to realize that human beings are naturally Bayesian, and the underlying principles behind Bayesian statistics are inherently intuitive. This episode's Measurement Bite from show sponsor Recast is a brief explanation of statistical significance (and why shorthanding it is problematic…and why confidence intervals are generally more practically useful in business than p-values) from Michael Kaminsky! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
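The "listen once, then update" idea is just Bayes' rule applied once. A minimal sketch, with entirely made-up numbers (nothing here comes from the episode):

```python
from fractions import Fraction

# Toy Bayes update: a prior that "this episode is useful", one observation,
# one posterior. All probabilities are illustrative assumptions.
prior_useful = Fraction(1, 2)           # prior: 50% chance it's useful
p_enjoy_given_useful = Fraction(9, 10)  # assumed likelihoods
p_enjoy_given_not = Fraction(3, 10)

# Bayes' rule after observing "I enjoyed it"
evidence = (p_enjoy_given_useful * prior_useful
            + p_enjoy_given_not * (1 - prior_useful))
posterior = p_enjoy_given_useful * prior_useful / evidence
print(posterior)  # → 3/4
```

One observation moves the probability from 1/2 to 3/4 — no simulation required, which is exactly the point made in the episode.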
Today's clip is from episode 145 of the podcast, with Jordan Thibodeau.

Alexandre Andorra and Jordan Thibodeau discuss the transformative impact of AI on productivity, career opportunities in the tech industry, and the intricacies of the job interview process. They emphasize the importance of expertise, networking, and the evolving landscape of tech companies, while also providing actionable advice for individuals looking to enhance their careers in AI and related fields.

Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Episode 39 - Mary Beth Feuling - From Data to Impact: Advancing Pediatric Nutrition in Hospital Systems

In this episode of Nutrition Pearls: the Podcast, co-hosts Nikki Misner and Bailey Koch speak with Mary Beth Feuling on how pediatric dietitians can use data, EMRs, and innovative tools to improve patient care and malnutrition identification at a systems level. Mary Beth is a pediatric nutrition specialist with expertise in nutrition informatics, quality, and research, dedicated to advancing care for children across all ages. Over the past thirteen years as an advanced practice dietitian, she has led quality improvement and research initiatives at Children's Wisconsin, mentoring dietitians to develop their research skills and improving pediatric nutrition care locally, nationally, and internationally.

Nutrition Pearls is supported by an educational grant from Mead Johnson Nutrition.

Resources:
Feuling MB, Hilbrands J, Hettich K, et al. Registered Dietitian Nutritionist consultation is associated with improvement in nutritional status in chronically ill children: a retrospective cohort study. J Acad Nutr Diet. 2025;(Epub ahead of print). doi:10.1016/j.jand.2025.05.011.
Hilbrands J, Feuling MB, Szabo A, et al. Nutrition screening in the pediatric intensive care unit: evaluation of an electronic medical record–based tool. Nutrients. 2023;15(21):4591. doi:10.3390/nu15214591.
Umentum B, Kim HJ, Adkins A, et al. Are dietitian recommendations followed? A descriptive study of pediatric hospitalized and ambulatory patients. J Hum Nutr Diet. 2024;(Epub ahead of print). doi:10.1111/jhn.13291.
Crouse J, Feuling MB, Winter T, Goday PS, Smith A. Electronic health record time-tracking provides real-time data to measure and benchmark dietitian productivity. J Hum Nutr Diet. 2024;37(1):105-110. doi:10.1111/jhn.13236.
Hilbrands J, Feuling MB, Szabo A, et al. Evaluation of an electronic medical record–based pediatric nutrition screening tool. J Hum Nutr Diet. 2023;36(5):1912-1921. doi:10.1111/jhn.13177.
Sparapani RA, Teng BQ, Hilbrands J, Pipkorn R, Beth FM, Goday PS. Novel pediatric height outlier detection methodology for electronic health records via machine learning with monotonic Bayesian additive regression trees. J Pediatr Gastroenterol Nutr. 2022;75(2):210-214.
Rusnak S, Charney P. Position of the Academy of Nutrition and Dietetics: Nutrition informatics. J Acad Nutr Diet. 2019;119(8):1375-1382. doi:10.1016/j.jand.2019.06.004.
Academy of Nutrition and Dietetics Quality Management
Academy of Nutrition and Dietetics Malnutrition Care Score
Nutrition Care Process
Interoperability and Health Information Standards
Children's Hospitals Solutions for Patient Safety
Six Domains of Health Care Quality

Produced by: Corey Irwin
NASPGHAN - Council for Pediatric Nutrition Professionals
cpnp@naspghan.org
If the phrase "Bayesian calculus" makes you run for the hills, you're not alone! Bayesian logic can sound intimidating at first, but if you give it a little time, you'll understand how useful it can be to evaluate the evidence for design in the natural world. On this ID The Future, Dr. Jonathan McLatchie gives us a beginner's guide to Bayesian thinking and teaches us how it can be used to build a strong cumulative case for intelligent design, as well as how we can use it in our everyday lives. Enjoying the podcast? Leave a written review at Apple Podcasts to help new listeners find the show!
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Guillaume Berthon.

Takeaways:
AI is reshaping the workplace, but we're still in early stages.
Networking is crucial for job applications in top firms.
AI tools can augment work but are not replacements for skilled labor.
Understanding the tech landscape requires continuous learning.
Timing and cultural readiness are key for tech innovations.
Expertise can be gained without formal education.
Bayesian statistics is a valuable skill for tech professionals.
The importance of personal branding in the job market.
You just need to know 1% more than the person you're talking to.
Sharing knowledge can elevate your status within a company.
Embracing chaos in tech can create new opportunities.
Investing in people leads...
AP of 50+ years, host of Gambling with an Edge and author of Gambling Wizards - Richard Munchkin (https://x.com/RWM21) joins the show. From his start as an advantage player (AP) in old Las Vegas to the end, where he shares the most...interesting(?) Bayesian update, this episode is a ride. Richard's storytelling is unmatched in the industry - and he has a career of professional gambling to draw content from. We talk about his early days, and what he's doing now with regards to sports betting and other casino games. Can't-miss episode!

0:00 Intro
2:45 How Richard Started w/ AP play
12:00 Lost Skill of Being Willing to Battle
22:00 Best Disguise Stories
28:15 Networking & Working w/ Teams
33:15 Traveling the World as an AP
46:45 Getting the Money Out is King
56:30 How to Scout Games
1:18:00 Why Richard Does Content
1:39:00 Q&A

Welcome to The Risk Takers Podcast, hosted by professional sports bettor John Shilling (GoldenPants13) and SportsProjections. This podcast is the best betting education available - PERIOD. And it's free - please share and subscribe if you like it.

My website: https://www.goldenpants.com/
Follow SportsProjections on Twitter: https://x.com/Sports__Proj
Want to work with my betting group?: john@goldenpants.com
Want 100s of +EV picks a day?: https://www.goldenpants.com/gp-picks
On the Overthinking It Podcast, we tackle what happens when majority rule is the worst option available. Episode 905: Bayesian Democracy originally appeared on Overthinking It, the site subjecting the popular culture to a level of scrutiny it probably doesn't deserve. [Latest Posts | Podcast (iTunes Link)]
Today's clip is from episode 144 of the podcast, with Maurizio Filippone.

In this conversation, Alex and Maurizio delve into the intricacies of Gaussian processes and their deep learning counterparts. They explain the foundational concepts of Gaussian processes, the transition to deep Gaussian processes, and the advantages they offer in modeling complex data. The discussion also touches on practical applications, model selection, and the evolving landscape of machine learning, particularly in relation to transfer learning and the integration of deep learning techniques with Gaussian processes.

Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Sign up for Alex's first live cohort, about Hierarchical Model building!
Get 25% off "Building AI Applications for Data Scientists and Software Engineers"
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Why GPs still matter: Gaussian Processes remain a go-to for function estimation, active learning, and experimental design – especially when calibrated uncertainty is non-negotiable.
Scaling GP inference: Variational methods with inducing points (as in GPflow) make GPs practical on larger datasets without throwing away principled Bayes.
MCMC in practice: Clever parameterizations and gradient-based samplers tighten mixing and efficiency; use MCMC when you need gold-standard posteriors.
Bayesian deep learning, pragmatically: Stochastic-gradient training and approximate posteriors bring Bayesian ideas to neural networks at scale.
Uncertainty that ships: Monte Carlo dropout and related tricks provide fast, usable uncertainty – even if they're approximations.
Model complexity ≠ model quality: Understanding capacity, priors, and inductive bias is key to getting trustworthy predictions.
Deep Gaussian Processes: Layered GPs offer flexibility for complex functions, with clear trade-offs in interpretability and compute.
Generative models through a Bayesian lens: GANs and friends benefit from explicit priors and uncertainty – useful for safety and downstream decisions.
Tooling that matters: Frameworks like GPflow lower the friction from idea to implementation, encouraging reproducible, well-tested modeling.
Where we're headed: The future of ML is uncertainty-aware by default – integrating UQ tightly into optimization, design, and deployment.

Chapters:
08:44 Function Estimation and Bayesian Deep Learning
10:41 Understanding Deep Gaussian Processes
25:17 Choosing Between Deep GPs and Neural Networks
32:01 Interpretability and Practical Tools for GPs
43:52 Variational Methods in Gaussian Processes
54:44 Deep Neural Networks and Bayesian Inference
01:06:13 The Future of Bayesian Deep Learning
01:12:28 Advice for Aspiring Researchers
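The "calibrated uncertainty" that makes GPs attractive can be seen in a minimal sketch: exact GP regression with an RBF kernel on two points, in plain Python. This is an illustrative toy (the kernel lengthscale, data, and noise level are all assumptions), not code from the episode or from GPflow:

```python
import math

# Minimal exact GP regression: RBF kernel, two training points.
# All numbers are illustrative assumptions.
def rbf(a, b, ls=1.0):
    # Squared-exponential kernel with lengthscale ls
    return math.exp(-0.5 * (a - b) ** 2 / ls ** 2)

X = [0.0, 2.0]   # training inputs
y = [1.0, -1.0]  # training targets
noise = 1e-2     # observation-noise variance

# (K + noise*I) for two points, inverted in closed form
k11 = rbf(X[0], X[0]) + noise
k22 = rbf(X[1], X[1]) + noise
k12 = rbf(X[0], X[1])
det = k11 * k22 - k12 * k12
inv = [[k22 / det, -k12 / det], [-k12 / det, k11 / det]]

def predict(x):
    ks = [rbf(x, X[0]), rbf(x, X[1])]
    alpha = [sum(inv[i][j] * y[j] for j in range(2)) for i in range(2)]
    mean = sum(ks[i] * alpha[i] for i in range(2))
    kinvks = [sum(inv[i][j] * ks[j] for j in range(2)) for i in range(2)]
    var = rbf(x, x) - sum(ks[i] * kinvks[i] for i in range(2))
    return mean, var

# Predictive variance is tiny at a training point, near the prior far away
_, var_near = predict(0.0)
_, var_far = predict(10.0)
print(round(var_near, 4), round(var_far, 4))
```

The variance pattern is the point: the model says "I know" near the data and "I don't know" away from it, which is exactly what active learning and experimental design exploit. Scaling this exact computation is what the inducing-point methods in the takeaways address.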
Michael Shermer sits down with Charles Murray (author of The Bell Curve, Coming Apart, and now Taking Religion Seriously) for a riveting 100-minute conversation about Murray's late-life turn from Harvard-bred agnosticism (“Smart people don't believe that stuff anymore”) to Bayesian theism (“I put the afterlife at just over 50%”). This wide-ranging discussion explores the evidence for the existence of God and the afterlife, the problem of evil, and the historical growth of Christianity. They also delve into topics such as the nature of consciousness, terminal lucidity, and even evolutionary vs. religious perspectives on love. A thought-provoking exploration for skeptics, seekers, and anyone wondering whether the universe has a purpose. Charles Murray is a policy analyst educated at Harvard and MIT and currently serves as the Hayek Emeritus Scholar at the American Enterprise Institute. He is the author of several influential books, including the controversial The Bell Curve, Coming Apart, and Facing Reality. His most recent book is Taking Religion Seriously.
In this third installment of the “Horse Series,” David sits down with Dr. Carlton Shield Chief Gover to explore the intersections of Indigenous oral traditions, radiocarbon dating, and the archaeology of horses across the Great Plains and the Caribbean.

Carlton shares how Pawnee oral traditions align with archaeological evidence, revealing new insights into the transitions from hunter-gatherer to agricultural societies. The conversation expands into how the reintroduction of horses revolutionized Plains warfare, movement, and culture — transforming not just how people traveled, but how they defined bravery, honor, and trade.

The episode then dives underwater — literally — as Carlton recounts his work with the Indiana University Underwater Science Program in the Dominican Republic. From Spanish shipwrecks to 400-year-old hazelnuts used to fight scurvy, the discussion highlights how horses, colonization, and trade converged across continents and oceans.

Topics Covered
Introduction to Carlton Shield Chief Gover's background and Pawnee heritage
Merging radiocarbon dating with Indigenous oral histories
The importance of corn, maize agriculture, and Plains village life
How the horse transformed Indigenous cultures and warfare
The practice of “counting coup” and individual honor in combat
The spread of horses before European contact
Carlton's archaeological work in Ukraine and comparisons to the Great Plains
Underwater archaeology in the Dominican Republic
Spanish shipwrecks, horseshoes, and gold-gilded stirrups
Hazelnuts as a 16th-century Spanish cure for scurvy
Dangers and logistics of underwater fieldwork
How early Caribbean horses may connect genetically to modern mustangs
The future of Plains and underwater archaeology

About the Guest
Dr. Carlton Shield Chief Gover is a citizen of the Pawnee Nation and a leading voice in Indigenous and Plains archaeology.
His research integrates oral histories, Bayesian radiocarbon analysis, and archaeological evidence to create a fuller understanding of the Great Plains' deep past. He currently serves as Assistant Professor and Curator of Archaeology at the University of Kansas and hosts The Great Plains Archaeology Podcast.

Follow Carlton on Instagram
Listen to The Great Plains Archaeology Podcast

Mentioned in This Episode
Hoof Beats: The Horse in Human History — Dr. William Taylor
Cassidy Thornhill's work on the Blacks Fork Horse
Yvette and Paulette Steeves' research on pre-contact horses
Indiana University Underwater Science Program (Dr. Charles Beeker)
University of Kansas Natural History Museum

Key Quote
“When you reanalyze radiocarbon data with Indigenous oral traditions, you actually illustrate a much more holistic picture of human history.” — Dr. Carlton Shield Chief Gover

Transcripts
For a rough transcript head over to: https://www.archaeologypodcastnetwork.com/ethnocynology/26

Links:
davidianhowe.com
Davidianhowe.com/store

ArchPodNet
APN Website: https://www.archpodnet.com
APN on Facebook: https://www.facebook.com/archpodnet
APN on Twitter: https://www.twitter.com/archpodnet
APN on Instagram: https://www.instagram.com/archpodnet
APN Shop

Affiliates
Motion

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Sign up for Alex's first live cohort, about Hierarchical Model building
Soccer Factor Model Dashboard

Today's clip is from episode 143 of the podcast, with Christoph Bamberg.

Christoph shares his journey into Bayesian statistics and computational modeling, the challenges faced in academia, and the technical tools used in research. Alex and Christoph delve into a specific study on appetite regulation and cognitive performance, exploring the implications of framing in psychological research and the importance of careful communication in health-related contexts.

Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Another knock against the antiplatelet/anticoagulant combo, polypills in HF, the physical exam of the future, and the problem of underpowered trials that even Bayesian analyses cannot rescue are the topics John Mandrola, MD, discusses in this week's podcast. This podcast is intended for healthcare professionals only. To read a partial transcript or to comment, visit: https://www.medscape.com/twic

I. Listener Feedback
Trends Study https://www.heartrhythmjournal.com/article/S1547-5271(11)00496-6/fulltext

II. Another Knock Against the Antiplatelet/Anticoagulation Combination
“Antiplatelet Plus Oral Anticoagulant Lowers Stroke, Raises Bleeding Risk” https://www.medscape.com/viewarticle/antiplatelet-plus-oral-anticoagulant-lowers-stroke-raises-2025a1000re0
ATIS-NVAF Trial https://jamanetwork.com/journals/jamaneurology/fullarticle/2839511
AQUATIC trial https://www.nejm.org/doi/abs/10.1056/NEJMoa2507532

III. Polypill for HFrEF
A Multilevel Polypill for Patients With HFrEF https://www.jacc.org/doi/10.1016/j.jacadv.2025.102195

IV. The Physical Exam of the Future
Point-of-Care Ultrasound https://doi.org/10.1016/j.jchf.2025.102707

V. More on Underpowered Trials – GA vs Moderate Sedation in IV Stroke
SEGA Trial https://jamanetwork.com/journals/jamaneurology/fullarticle/2839838
Bayesian Analyses of CV Trials https://doi.org/10.1016/j.cjca.2021.03.014

You may also like: The Bob Harrington Show with the Stephen and Suzanne Weiss Dean of Weill Cornell Medicine, Robert A. Harrington, MD. https://www.medscape.com/author/bob-harrington

Questions or feedback, please contact news@medscape.net
Sign up for Alex's first live cohort, about Hierarchical Model building!
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Bayesian mindset in psychology: Why priors, model checking, and full uncertainty reporting make findings more honest and useful.
Intermittent fasting & cognition: A Bayesian meta-analysis suggests effects are context- and age-dependent – and often small but meaningful.
Framing matters: The way we frame dietary advice (focus, flexibility, timing) can shape adherence and perceived cognitive benefits.
From cravings to choices: Appetite, craving, stress, and mood interact to influence eating and cognitive performance throughout the day.
Define before you measure: Clear definitions (and DAGs to encode assumptions) reduce ambiguity and guide better study design.
DAGs for causal thinking: Directed acyclic graphs help separate hypotheses from data pipelines and make causal claims auditable.
Small effects, big implications: Well-estimated “small” effects can scale to public-health relevance when decisions repeat daily.
Teaching by modeling: Helping students write models (not just run them) builds statistical thinking and scientific literacy.
Bridging lab and life: Balancing careful experiments with real-world measurement is key to actionable health-psychology insights.
Trust through transparency: Openly communicating assumptions, uncertainty, and limitations strengthens scientific credibility.

Chapters:
10:35 The Struggles of Bayesian Statistics in Psychology
22:30 Exploring Appetite and Cognitive Performance
29:45 Research Methodology and Causal Inference
36:36 Understanding Cravings and Definitions
39:02 Intermittent Fasting and Cognitive Performance
42:57 Practical Recommendations for Intermittent Fasting
49:40 Balancing Experimental Psychology and Statistical Modeling
55:00 Pressing Questions in Health Psychology
01:04:50 Future Directions in Research

Thank you to my Patrons for...
This is the extended "director's cut" of a talk delivered for "RatFest 2025" (next year to be "Conjecture Con"). This also serves as a supplement to my "Doom Debates" interview, which can be found here: https://youtu.be/koubXR0YL4A?si=483M6SPOKwbQYmzb

It is simply assumed that some version of "Bayesian reasoning" is how AI will "create" knowledge. This misconception permeates the https://ai-2027.com paper, Bostrom and Yudkowsky's work on this, that of every other AI "Doomer", and, on the other extreme, even the so-called "AI-Accelerationists". All of that indicates a deep misconception about how new explanations are generated, which comes from a deep misconception about how science works, because almost no one in the field of AI seems to think the *philosophy of* science is even relevant. I explain what has gone wrong:

00:00 Introduction
09:14 The Big Questions and the new Priesthoods
18:40 Nick Bostrom and Superintelligence
25:10 If anyone builds it, everyone dies and Yudkowsky
33:32 Prophecy, Inevitability, Induction and Bayesianism
41:42 Popper, Kuhn, Feyerabend and Lakatos
49:40 AI researchers ignore The Philosophy of Science
58:46 A new test for AGI from Sam Altman and David Deutsch?
1:03:35 Accelerationists, Doomers and "Everyone dies"
1:10:21 Conclusions
1:15:35 Audience Questions