Podcasts about Sparse

  • 231 podcasts
  • 448 episodes
  • 32m average episode duration
  • 1 episode every other week
  • Latest episode: Feb 24, 2026

POPULARITY

[popularity chart, 2019–2026]



Latest podcast episodes about Sparse

Adventure On Deck
Wide Open Fiction. Week 47: The American Short Story

Feb 24, 2026 · 33:25


With only five weeks left in this year-long journey, I can feel the end approaching—less like a high-wire act and more like gathering momentum toward something unknown. Week 47 of Ted Gioia's Immersive Humanities course explores twentieth-century American fiction through short stories and novel excerpts, revealing a distinctly American voice: sharp dialogue, vivid settings, and an experimental edge.

O. Henry, "The Gift of the Magi" (1906): A charming story of love and sacrifice.
F. Scott Fitzgerald, "The Diamond as Big as the Ritz" (1922): Wealth, excess, and a surprising twist.
Ernest Hemingway, "The Killers" (1927): Sparse, tension-filled dialogue.
William Faulkner, The Sound and the Fury (1929, excerpt): Challenging, with shifting time and perspective.
Ralph Ellison, Invisible Man (1952, excerpt): A powerful sense of invisibility and identity.
Shirley Jackson, "The Lottery" (1948): Disturbing and unforgettable.
Flannery O'Connor, "A Good Man Is Hard to Find" (1955): A Southern Gothic tale with shocking turns.

Together, these works feel spacious, restless, and distinctly American—and they remind me how much more willing I am now to embrace difficult, even strange, books.

This is a year-long challenge! Join me next week for a little Magical Realism.

LINK
Ted Gioia/The Honest Broker's 12-Month Immersive Humanities Course (paywalled!)
My Amazon Book List (NOT an affiliate link)

CONNECT
The complete list of Crack the Book episodes: https://cheryldrury.substack.com/p/crack-the-book-start-here?r=u3t2r
To read more of my writing, visit my Substack: https://www.cheryldrury.substack.com
Follow me on Instagram: https://www.instagram.com/cldrury/

LISTEN
Spotify: https://open.spotify.com/show/5GpySInw1e8IqNQvXow7Lv?si=9ebd5508daa245bd
Apple Podcasts: https://podcasts.apple.com/us/podcast/crack-the-book/id1749793321
Captivate: https://crackthebook.captivate.fm

LessWrong Curated Podcast
"Weight-Sparse Circuits May Be Interpretable Yet Unfaithful" by jacob_drori

Feb 13, 2026 · 26:57


TLDR: Recently, Gao et al trained transformers with sparse weights, and introduced a pruning algorithm to extract circuits that explain performance on narrow tasks. I replicate their main results and present evidence suggesting that these circuits are unfaithful to the model's "true computations". This work was done as part of the Anthropic Fellows Program under the mentorship of Nick Turner and Jeff Wu.

Introduction: Recently, Gao et al (2025) proposed an exciting approach to training models that are interpretable by design. They train transformers where only a small fraction of the weights are nonzero, and find that pruning these sparse models on narrow tasks yields interpretable circuits. Their key claim is that these weight-sparse models are more interpretable than ordinary dense ones, with smaller task-specific circuits. Below, I reproduce the primary evidence for these claims: training weight-sparse models does tend to produce smaller circuits at a given task loss than dense models, and the circuits also look interpretable. However, there are reasons to worry that these results don't imply that we're capturing the model's full computation. For example, previous work [1, 2] found that similar masking techniques can achieve good performance on vision tasks even when applied to a [...]

Outline:
(00:36) Introduction
(03:03) Tasks
(03:16) Task 1: Pronoun Matching
(03:47) Task 2: Simplified IOI
(04:28) Task 3: Question Marks
(05:10) Results
(05:20) Producing Sparse Interpretable Circuits
(05:25) Zero ablation yields smaller circuits than mean ablation
(06:01) Weight-sparse models usually have smaller circuits
(06:37) Weight-sparse circuits look interpretable
(09:06) Scrutinizing Circuit Faithfulness
(09:11) Pruning achieves low task loss on a nonsense task
(10:24) Important attention patterns can be absent in the pruned model
(11:26) Nodes can play different roles in the pruned model
(14:15) Pruned circuits may not generalize like the base model
(16:16) Conclusion
(18:09) Appendix A: Training and Pruning Details
(20:17) Appendix B: Walkthrough of pronouns and questions circuits
(22:48) Appendix C: The Role of Layernorm

The original text contained 6 footnotes which were omitted from this narration.

First published: February 9th, 2026
Source: https://www.lesswrong.com/posts/sHpZZnRDLg7ccX9aF/weight-sparse-circuits-may-be-interpretable-yet-unfaithful

Narrated by TYPE III AUDIO.
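For readers skimming the description above, here is a minimal sketch of the two operations it keeps referring to: enforcing weight sparsity via a top-k magnitude mask, and the zero- vs mean-ablation baselines used when evaluating a pruned circuit. All names here are hypothetical illustrations, not the actual code from Gao et al or the post.

```python
import torch

def topk_weight_mask(weight: torch.Tensor, density: float) -> torch.Tensor:
    """Binary mask keeping only the largest-magnitude `density` fraction
    of weights: a crude stand-in for the weight-sparsity constraint
    (only a small fraction of weights nonzero) described above."""
    k = max(1, int(weight.numel() * density))
    # The k-th largest magnitude is the (numel - k + 1)-th smallest.
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

def ablate(acts: torch.Tensor, keep: torch.Tensor, mode: str = "zero") -> torch.Tensor:
    """Replace activations outside a candidate circuit. Zero ablation
    substitutes 0; mean ablation substitutes the batch-mean activation
    (the two baselines compared in the results section)."""
    if mode == "zero":
        baseline = torch.zeros_like(acts)
    else:
        baseline = acts.mean(dim=0, keepdim=True).expand_as(acts)
    return keep * acts + (1 - keep) * baseline
```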

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" presentation (with back-of-the-envelope calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @ Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean:
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google:
* https://google.com
* https://deepmind.google

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have frontier capability, but also efficiency, and then offer that range of models that people like to use. And some part of this was started because of your hardware work, some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But it's really impressive to see it all come together like this.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's a whole bunch of things up and down the stack, and all of those really combine to help make us able to make highly capable large models, as well as software techniques to get those large-model capabilities into much smaller, lighter-weight models that are much more cost-effective and lower latency, but still quite capable for their size.

Alessio Fanelli [00:01:31]: How much pressure do you have on having the lower bound of the Pareto frontier, too? The new labs are always trying to push the top performance frontier because they need to raise more money and all of that, and you guys have billions of users. I think initially when you worked on the TPU, you were thinking about: if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU count. What's that discussion today at Google? How do you prioritize frontier versus "we have to do this; how do we actually deploy it if we build it"?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because that's where you see what capabilities now exist that didn't exist in the slightly less capable last-year's version or six-months-ago version. At the same time, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader uses. So what we want is to always have a highly capable, affordable model that enables a whole bunch of lower-latency use cases; people can use it for agentic coding much more readily. And then have the high-end frontier model that is really useful for deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not an either-or choice. You sort of need the one in order to actually get a highly capable, more modest-size model.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But I'm curious how you think about the cycle of these ideas, even, you know, sparse models. How do you reevaluate them? How do you think about what is worth revisiting in the next generation of model? You worked on so many ideas that ended up being influential, but in the moment they might not have felt that way.

Jeff Dean [00:03:52]: Yeah. I mean, I think distillation was originally motivated because we had a very large image data set at the time, 300 million images we could train on. And we were seeing that if you create specialists for different subsets of those image categories (this one's going to be really good at mammals, this one's really good at indoor room scenes, or whatever), cluster those categories, and train on an enriched stream of data after pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of: okay, if we want to actually serve that, can we train all these independent expert models and then squish them into something that actually fits in a form factor you can serve? And that's not that different from what we're doing today. Often today, instead of an ensemble of 50 models, we have a much larger-scale model that we then distill into a much smaller-scale model.

Shawn Wang [00:05:09]: Yeah. Part of me also wonders if distillation also has a story with the RL revolution. Let me try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution, but it might be lossy in other areas; it's kind of an uneven technique. The general dream is to be able to advance capabilities without regressing on anything else. That whole capability-merging without loss, I feel like some part of it should be a distillation process, but I can't quite articulate it, and I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah. I mean, I tend to think of one of the key advantages of distillation as this: you can have a much smaller model and a very large training data set, and you get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to coax the right behavior out of the smaller model, which you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
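As a concrete anchor for the "logits, not hard labels" point Jeff is making, here is the classic temperature-scaled distillation loss from the Hinton, Vinyals & Dean paper, sketched in PyTorch. This is the textbook formulation, not a claim about how Gemini distillation is actually implemented.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Soft-label distillation (Hinton, Vinyals & Dean, 2015).

    The teacher's full logit distribution carries far more signal per
    example than a one-hot label, which is why repeated passes over
    the same data keep paying off, as Jeff describes above."""
    # KL between temperature-softened teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2          # rescale gradients to match the hard loss
    hard = F.cross_entropy(student_logits, labels)  # ordinary supervision
    return alpha * soft + (1 - alpha) * hard
```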
You can get very close to your largest model's performance with distillation approaches. And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as, or even substantially better than, the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are our Pro-scale model, and we can distill from that as well into our Flash-scale model. So it's an important set of capabilities to have. And also, inference-time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, cool. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics-wise: because Flash is so economical, you can use it for everything. It's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products in various ways, AI Mode, Overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash is behind AI Mode. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, one of the things that is quite nice about the Flash model is that not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things, which will involve generating many more tokens before they finish what you asked them to do. You're now going to ask not just "write me a for loop" but "write me a whole software package to do X or Y or Z." So having low-latency systems that can do that seems really important, and Flash is one way of doing that. Obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs: the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context attention operations, or sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make these models servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of one generation delayed? I almost think about the capability like this: the Pro model today saturates a certain set of tasks, so next generation, those same tasks will be saturated at the Flash price point.
And I think for most of the things people use models for, at some point the Flash model two generations out will be able to do basically everything. How do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if the distribution of what people are asking the models to do is stationary, right? But what often happens is that as the models become more capable, people ask them to do more. I think this happens in my own usage. I used to try our models a year ago for some coding task, and they were okay at simpler things but wouldn't work very well for more complicated things. Since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask for much more complicated things. And that's true not just of coding but of, say, "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment?" That's a much more complicated task than people would have asked a year ago. So you are going to want more capable models to push the frontier of what people ask the models to do. And that also gives us insight into, okay, where do things break down? How can we improve the model in those particular areas to make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? It's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards?

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external, publicly available ones, have their utility, but they often have a lifespan of utility: they're introduced, and maybe they're quite hard for current models. I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 30%, but not higher. Then you can work on improving whatever capability the benchmark is trying to assess and get it up to 80, 90%. Once it hits 95% or so, you get very diminishing returns from focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage of the public data, or very related data, into your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know they weren't represented in the training data at all. There are capabilities we want the model to have that it doesn't have now, and then we can work on assessing how to make the model better at those things. Is it that we need different kinds of data to train on, more specialized for this particular kind of task?
Do we need a bunch of architectural improvements, or some other model capability improvements? What would help make that better?

Shawn Wang [00:12:53]: Is there such an example, a benchmark that inspired an architectural improvement? I'm just kind of jumping on that.

Jeff Dean [00:13:02]: I mean, I think some of the long-context capability of the Gemini models, which came first in 1.5, really was about looking at, okay, we want to have...

Shawn Wang [00:13:15]: Immediately everyone jumped to completely green charts. Everyone had them. I was like, how did everyone crack this at the same time?

Jeff Dean [00:13:23]: I mean, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something, and most models don't actually go much beyond 128K these days. We're trying to push the frontier to 1 million or 2 million of context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, in the context and actually being able to make use of that is useful. But the single needle-in-a-haystack benchmark is saturated. So you really want more complicated, multi-needle or more realistic "take all this content and produce this kind of answer" benchmarks that better assess what people really want to do with long context. Which is not just: can you tell me the product number for this particular thing?

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, and you see the architectural thing you need to do in order to fix it. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where you win short term, but longer term, I don't know if that's going to scale, and you might have to undo it.

Jeff Dean [00:15:01]: I mean, I like to focus not on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. What you would really want is: can I attend to the internet while I answer my question? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do; you're not going to do that for a billion tokens, let alone a trillion. But if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the deeper representations we can find, not just for a single video but across many videos. And on a personal Gemini level, you could attend to all of your personal state, with your permission.
So like your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is how you get the algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way.

Shawn Wang [00:16:26]: By the way, I did some math, and if you spoke all day, every day, for eight hours a day, you'd only generate a maximum of like a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But then say, okay, I want to be able to understand everything people are putting on videos.

Shawn Wang [00:16:46]: Well, also, I think the classic example is that you start going beyond language into proteins and whatever else is extremely information-dense.

Jeff Dean [00:16:55]: Yeah. I mean, one of the things about Gemini's multimodal aspects is that we've always wanted it to be multimodal from the start. To some people that means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities, like LIDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. There are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality with certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in the trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it teaches the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe... since we're on this topic, I just get to ask you all the questions I've always wanted to ask, which is fantastic. Are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had the DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, video as opposed to static images, because there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do: interpret the things we're seeing or paying attention to, and then help us use that information to do things.

Shawn Wang [00:19:05]: On motion, I still want to shout out: I think Gemini is still the only native video-understanding model out there. I use it for YouTube all the time.

Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, some soccer goals, things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, the date when they happened, and a short description? And you now get an 18-row table of that information extracted from the video, which is not something most people think of: turning video into a SQL-like table.

Alessio Fanelli [00:20:11]: Has there been any discussion inside Google of, you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, versus for an LLM, should you expect 20 links that are all highly relevant? How do you internally figure out how to build the AI mode that is maybe a much broader search in span, versus the more human one?

Jeff Dean [00:20:47]: I mean, even pre-language-model work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight methods; you're down to like 30,000 documents or something. Then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, to get down ultimately to what you show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar. You're going to attend to trillions of tokens, but you're going to want to identify, say, the 30,000-ish documents with maybe 30 million interesting tokens. Then how do you go from that to the 117 documents you really should be paying attention to in order to carry out the task the user asked? You can imagine systems with a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models; then some system that helps you narrow from 30,000 to the 117 with a somewhat more sophisticated model or set of models; and then maybe the final model, the one that looks at the 117 things, is your most capable model. It's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion: you are searching the internet, but you're finding a very small subset of things that are relevant.

Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT was basically immediately put inside of Google search, and that improved results a lot, right?
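The shape of the funnel Jeff describes, as a toy sketch. The 30,000 and 117 are his illustrative numbers; the three scorer callables are hypothetical stand-ins for progressively more expensive models.

```python
def cascade_answer(query, corpus, cheap_score, mid_score, strong_model):
    """Cheap filter -> better reranker -> most capable model: the same
    funnel as classic search ranking, and one way to give the
    'illusion' of attending to a far larger corpus than any single
    model could read."""
    # Stage 1: lightweight, highly parallel scoring over the whole corpus.
    candidates = sorted(corpus, key=lambda d: cheap_score(query, d),
                        reverse=True)[:30_000]
    # Stage 2: a more sophisticated (and costlier) model narrows further.
    shortlist = sorted(candidates, key=lambda d: mid_score(query, d),
                       reverse=True)[:117]
    # Stage 3: only the final handful is actually read by the best model.
    return strong_model(query, shortlist)
```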
I don't have any numbers off the top of my head, but I'm sure you do; those are obviously the most important numbers to Google.

Jeff Dean [00:23:08]: I mean, going to an LLM-based representation of text and words enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high-traffic systems. It's Google, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video, and it predicts the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok as well, for xAI.

Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have a history of what the progression was?

Jeff Dean [00:24:09]: Oh yeah. I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that really happened in 2001 was that we were working to scale the system in multiple dimensions. One: we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And we also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system, where you have more and more shards as the index grows (you have like 30 shards, and if you want to double the index size, you make 60 shards), so you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those shards. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of that index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to throw 50 terms into the query from the user's original three- or four-word query, because now you can add synonyms, like restaurant and restaurants and cafe and bistro and all these things.
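The back-of-the-envelope behind that 2001 switch, spelled out with the numbers Jeff quotes:

```python
shards = 60            # index split 60 ways to bound per-query latency
replicas = 20          # copies of each shard to absorb query traffic
machines = shards * replicas
print(machines)        # 1200 machines, each with disks and some RAM

# Key observation: spread 1/1200th of the index per machine, and the
# fleet's aggregate RAM holds one full copy of the index in memory,
# turning per-term disk seeks (one per shard, per term) into memory
# lookups -- which is what made 50-term expanded queries affordable.
```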
And you can suddenly start really getting at the meaning of the words, as opposed to the exact form the user typed in. That was 2001, very much pre-LLM, but it was really about softening the strict definition of what the user typed in order to get at the meaning.

Alessio Fanelli [00:26:47]: What are principles you use to design these systems, especially when, in 2001, the internet is doubling or tripling in size every year? I think today you see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this?

Jeff Dean [00:27:08]: I mean, first, whenever you're designing a system, you want to understand which design parameters are going to be most important. How many queries per second do you need to handle? How big is the internet; how big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will the system work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. Like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate is actually the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in under a minute.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. Was there any... you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated too.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify the pages: you have to decide which pages should be updated and at what frequency.
Jeff Dean [00:29:30]: Oh yeah. There's a whole system behind the scenes that's trying to decide update rates and the importance of pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they changed might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah. Well, this mention of latency and saving things reminds me of one of your classics, which I have to bring up: Latency Numbers Every Programmer Should Know. Was there a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this has eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say: okay, if I need to design a system to do image search and thumbnailing for the result page, what would I do? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? You can actually do thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in this particular kind of...

Shawn Wang [00:31:51]: ...which is a simple byte conversion, that's nothing interesting. I wonder, if you were to update your list...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.

Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory (like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network), and then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low: on the order of, depending on your precision, I think, sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip (not even off-chip, but on the other side of the same chip) can be a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing that you moved many, many times.
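A back-of-the-envelope in the spirit of the latency-numbers list, using the two round figures Jeff gives (roughly 1 pJ for a multiply, roughly 1,000 pJ to move a weight across the chip; both are order-of-magnitude talking points, not measured specs):

```python
MULTIPLY_PJ = 1.0        # assumed: one low-precision multiply
MOVE_WEIGHT_PJ = 1000.0  # assumed: moving one weight from far SRAM

def energy_per_multiply(batch: int) -> float:
    """A weight moved once is reused `batch` times, so the movement
    cost is amortized across the whole batch dimension."""
    return MULTIPLY_PJ + MOVE_WEIGHT_PJ / batch

for b in (1, 16, 256):
    print(f"batch {b:3d}: {energy_per_multiply(b):6.1f} pJ per multiply")
# batch   1: 1001.0 pJ per multiply
# batch  16:   63.5 pJ per multiply
# batch 256:    4.9 pJ per multiply
```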
So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost, and the compute cost inefficiency you get, is quite large.

Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? Obviously Groq has caused a lot of waves betting very hard on SRAM. I wonder if that's something you already saw with the TPUs: to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput and latency improvements from doing that. You're now striping your smallish-scale model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC, which is the most extreme version. How much is worth doing in hardware when things change so quickly? What's the internal discussion?

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. As a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center, and then it has to have a reasonable lifetime, taking you out another three, four, five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field.
And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N+2, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N+1, but bigger changes are going to require that the chip design be earlier in its lifetime design process. So whenever we can do that, it's generally good. Sometimes you can put in speculative features that maybe won't cost much chip area, and if they work out, they make something ten times as fast; if they don't work out, well, you burned a tiny amount of chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go.

Alessio Fanelli [00:37:58]: Is there a reverse of that: we've already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt the model architecture so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. Sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do it.

Shawn Wang [00:38:40]: How low can we go in precision? Because people are saying ternary...

Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very low-bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.

Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that. While we're on this topic: the concept of precision at all is weird when we're sampling. At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator on top. So there's a movement towards energy-based models and processors. Obviously you've thought about it; what's your commentary?

Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends.
Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. Speculative decoding is a way you can get sort of an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor: you predict eight tokens out, which enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models), and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency.

Shawn Wang [00:41:03]: Yeah. I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.

Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you lose a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there are a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can accomplish much more significant pieces of work collectively than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. The improvements you're seeing in both math and coding: if we could apply those to other, less verifiable domains, because we've come up with RL techniques that enable us to do that effectively, that would really make the models improve quite a lot, I think.
Uh, effectively, that would, that would really make the models improve quite a lot. I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part. That you can score or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, is this things, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes I can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we. the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, Ellen judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those. And seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM, eight K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can, and now you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out. 
Just to draw a bit on the IMO goal: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM. Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. Right? We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and do chains of thought and roll them back: now that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one. In a lot of ways, we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things. Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago. Jeff Dean [00:48:06]: I mean, I do think that progression, the IMO with translating to Lean and using Lean, and also a specialized geometry model, and then the next year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street-sign recognition model; I want to decode speech, so I have a speech recognition model. I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. And they're getting better and better. Shawn Wang [00:49:10]: And you don't need domain experts. So I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work, I don't know where the IMO competition was held, I don't know the rules of it, I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases. Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is this:
There's this concept of, maybe, the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, who knows, Gemini Pro is maybe one to ten trillion parameters; we don't know. But the Gemma models, for example: a lot of people want open local models like that, and they have some knowledge which is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and you're going down to the small models, you're actually memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning, you know? Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than this obscure fact that it has. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are, right? It maybe doesn't need to know how long some teeny little bridge in some other, more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... Shawn Wang [00:51:49]: Yeah. Jeff Dean: ...and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right? Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email, probably. We'd rather have a single model that we can then use, with retrieving from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that, with multiple stages of interaction. That makes sense. Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or? Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models: it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all of which can be knitted together to work in concert and be called upon in different circumstances. Right? Like, if I have a health-related thing, then it should enable using this health module in conjunction with the main base model, to be even better at those kinds of things. Yeah. Shawn Wang [00:54:36]: Installable knowledge. Jeff Dean [00:54:37]: Right. Shawn Wang [00:54:38]: Just download it as a package. Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there. Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we don't have access to, appropriately so, but there are a lot of healthcare organizations that want to train models on their own data, data that is not public. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but might be better than a general model trained on, say, public data. Shawn Wang [00:55:58]: Yeah. By the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah. Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text. Shawn Wang [00:56:20]: So you can just do it that way: put it in the context. But you can't put your whole data set in the context, right? Jeff Dean [00:56:27]: If you take a language like, you know, Somali, there is a fair bit of Somali text in the world, or Ethiopian Amharic. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models. Shawn Wang [00:56:49]: Yeah.
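As a rough illustration of the retrieve-then-critique-then-reason loop Jeff describes in this conversation (retrieve broadly, have a model, perhaps the same model prompted as a critic, winnow the candidates, then reason over the survivors), here is a minimal sketch. `search` and `llm` are hypothetical stand-ins, not any real Google or Gemini API.

```python
# Hypothetical sketch of multi-stage retrieval with a model-as-critic.
# `search` and `llm` are assumed stand-ins for a retriever and a language
# model call; neither is a real API.

def answer(question: str, llm, search, n_candidates: int = 2000, keep: int = 50) -> str:
    # Stage 1: broad, cheap recall.
    docs = search(question, limit=n_candidates)
    # Stage 2: the same model, prompted differently, acts as a relevance critic.
    # (One call per candidate here for clarity; batching would be cheaper.)
    scored = [(float(llm(f"Rate 0-10 how relevant this passage is to "
                         f"{question!r}. Reply with just a number.\n\n{d}")), d)
              for d in docs]
    top = [d for _, d in sorted(scored, key=lambda s: s[0], reverse=True)[:keep]]
    # Stage 3: reason over the winnowed context.
    return llm(f"Using only these sources, answer {question!r}:\n\n" + "\n\n".join(top))
```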

Scott Wood's Mixcast
Ambient Mix - Feb 26 - Scott Wood ep 132

Scott Wood's Mixcast

Play Episode Listen Later Feb 8, 2026 35:56


a word from my AI about this mix... "A slow-burn ambient and experimental mix. Sparse rhythms, muted tension, and long stretches of space. It moves between dub-leaning electronics, restrained techno edges, and abstract sound design rather than obvious peaks. The tracks aren't blended for momentum but for contrast - moments of pressure, then release, then drift. Best suited to late listening, headphones on, not background music."

Get Rich Education
590: Is the World Overpopulated or Underpopulated? What it Means for Housing's Future

Get Rich Education

Play Episode Listen Later Jan 26, 2026 44:35


Keith challenges the usual "overpopulated vs. underpopulated" debate and shows why that's the wrong way to think about demographics, especially if you're a real estate investor. Listeners will hear about surprising global population comparisons that flip common assumptions:
Why raw population numbers don't actually explain housing shortages or rent strength.
How household formation, aging, and migration really drive demand for rentals.
Which kinds of markets tend to see persistent housing pressure, and why the US has a long-term demographic edge.
You'll come away seeing population headlines very differently, and with a clearer lens for spotting where future housing demand is most likely to show up. Episode Page: GetRichEducation.com/590 For access to properties or free help with a GRE Investment Coach, start here: GREmarketplace.com GRE Free Investment Coaching: GREinvestmentcoach.com Get mortgage loans for investment property: RidgeLendingGroup.com or call 855-74-RIDGE or e-mail: info@RidgeLendingGroup.com Invest with Freedom Family Investments. For predictable 10-12% quarterly returns, visit FreedomFamilyInvestments.com/GRE or text 1-937-795-8989 to speak with a freedom coach Will you please leave a review for the show? I'd be grateful. Search "how to leave an Apple Podcasts review" For advertising inquiries, visit: GetRichEducation.com/ad Best Financial Education: GetRichEducation.com Get our wealth-building newsletter free: GREletter.com Our YouTube Channel: www.youtube.com/c/GetRichEducation Follow us on Instagram: @getricheducation Complete episode transcript: Keith Weinhold  0:01   Welcome to GRE. I'm your host, Keith Weinhold. Is the world overpopulated or underpopulated? Also, is the United States over- or underpopulated? These are not just rhetorical questions, because I'm going to answer them both. Just one of Africa's 54 nations has more births than all of Europe and Russia combined. One US state has seen its population decline for decades. This is all central to housing demand today, on Get Rich Education.   Keith Weinhold  0:36   Since 2014, the powerful Get Rich Education podcast has created more passive income for people than nearly any other show in the world. This show teaches you how to earn strong returns from passive real estate investing in the best markets, without losing your time being a flipper or landlord. Show host Keith Weinhold writes for both Forbes and Rich Dad Advisors, and delivers a new show every week. Since 2014, there have been millions of listener downloads across 188 world nations. Show guests include top-selling personal finance author Robert Kiyosaki. Get Rich Education can be heard on every podcast platform, plus it has its own dedicated Apple and Android listener phone apps. Build wealth on the go with the Get Rich Education podcast. Sign up now for the Get Rich Education podcast, or visit getricheducation.com.   Speaker 1  1:21   You're listening to the show that has created more financial freedom than nearly any show in the world. This is Get Rich Education.   Keith Weinhold  1:31   Welcome to GRE. From Norfolk, Virginia to Norfolk, Nebraska and across 188 nations worldwide, you are inside Get Rich Education. I am the GRE founder, best-selling author and longtime real estate investor. You can see my written work in Forbes and USA Today, but I'm best known as the host of this operation that you're listening to right now. My name is Keith Weinhold.
You probably know that already. One reason we're talking about underpopulated versus overpopulated today is that one of my degrees is in geography, and demography, essentially, is human geography; that's why this topic is in my wheelhouse. It's just a humble bachelor's degree, by the way. If a population is not staying stable or growing, then demand for housing must atrophy away. That's what people think, but that is not true. It's oversimplified, and in some cases it might even be totally false. You're going to see why. Now, Earth's population is at an all-time high of about 8.2 billion people, and it keeps growing, and it's going to continue to grow, but the rate of growth is slowing. Now, where could all the people on earth fit? This is a bit of a ridiculous abstraction in a sense, but I think it helps you visualize things. Just take this scenario: if all the humans were packed together tightly, but in a somewhat realistic, standing-room-only way, every person on earth standing shoulder to shoulder with about 2.7 square feet per person, they would be packed like a subway car. They could fit in a square about 27 kilometers on one side, about 17 miles on each side of that square. Now, what does that mean in real places? That is smaller than New York City, about half the size of Los Angeles County, and roughly the footprint of Lake Tahoe. So yes, every human alive today could physically fit inside one midsize US metro area. This alone tells you something important: the world's problem is certainly not a lack of space. Rather, it's where people live, not how many there are. So that was all of Earth's inhabitants. Now, where could all Americans fit? Using the same shoulder-to-shoulder assumption, with the US population supposed to be about 350 million by mid-year this year, that's a square about five and a half kilometers, or 3.4 miles, on each side. Some real-world comparisons: that's about half of Manhattan, smaller than San Francisco, and roughly the size of Disney World. So every American could fit into a single small-city footprint. And if you're beginning to form an early clue that we are not overpopulated globally, yes, that's the sense that you should be getting.     Keith Weinhold  5:01   Now, if you're in Bangladesh, it feels overpopulated there. They've got 175 million people, and that nation is only the size of Iowa in area. Bangladesh is low-lying and typhoon-prone; they get a lot of flooding, which complicates their already bad sanitation problems, and a dense population like that creates waterborne diseases. It's really more of an infrastructure problem in a place like Bangladesh than it is a population problem. Then, oppositely, you've got Australia: as much land as the 48 contiguous states, yet just 27 million people, and only about 1/400th of Bangladesh's population density. Now, about uneven population distribution: about 80% of Americans live in the eastern half of the US, but yet the East is not overpopulated, because we have sufficient infrastructure. And I've got some more mind-blowing population stats for you later, both world and US. Now, as far as our central question, is the world overpopulated or underpopulated, depending on who you ask and where they live, you're going to hear completely different answers. Some people are convinced that the planet is bursting at the seams. Others warn that we're headed for a population collapse.
But here's the problem: that question, overpopulated or underpopulated, is the wrong question. It's the wrong framing, especially if you're into real estate, because housing demand doesn't respond to total headcount or global averages or scary demographic headlines. Housing demand responds to where people live, how old they are, and how they form households. And once you understand this, a lot of things suddenly begin to make sense, like why housing shortages persist, why rents stay high even when affordability feels stretched, why some states struggle while others boom, and why population headlines often mislead investors.   Keith Weinhold  7:20   So today I want to reframe how you think about population and connect it directly to housing demand, both globally and right here in the United States. And let's start with the US, because that's probably where you invest.    Keith Weinhold  7:33   Here's a simple fact that should confuse people, but usually doesn't: the United States has below-replacement fertility. I'll talk about fertility rates a little later; they're similar to birth rates. It means that Americans are not having enough children to replace the population naturally, and without immigration, the US population would eventually shrink. And yet in the US we have a housing shortage, rising rents, tight vacancy in a lot of metros, and persistent demand for rental housing, which could all seem contradictory. Now, if population alone determined housing demand, well, then the US really shouldn't have any housing shortage at all, but it does. So clearly, population alone is not the main driver, and really, that contradiction is your first clue that most demographic conversations are missing the point. Aging does not reduce housing demand the way people think. A misconception, really, is that an aging population automatically reduces housing demand. It does not; in fact, it's just the opposite. If a population is too young, that tends to kill housing demand, and that's because five-year-old kids and ten-year-old kids do not form their own households. Instead, what an aging population often does is change the type of housing that's demanded: seniors aging in place, some of them downsizing, seniors living alone, sometimes after a spouse passes away, others relocating closer to health care or to family. So aging can increase unit demand even if population growth slows. So already we've broken two myths here: slower population growth doesn't mean weaker housing demand, and aging doesn't mean fewer housing units are needed. Now let's explain why. Really, the core idea that unlocks everything is that people don't live inside what are called population units; they live in households. You are one person; that does not mean that your dwelling is then one population unit. That's not how that works. You are part of a household, whether that's a household of one person or of five or eleven people. Housing demand is driven by the number of households, the type of households, and where those households are forming, not by raw population totals. So the same population can have wildly different demand. Just think about how five people living together in one home is one housing unit, while those same five people living separately is five housing units. Same population, five times the housing demand. And this is why population statistics alone are almost useless for real estate investors; you need to know how people are living, not just how many there are.
The biggest surge in housing demand happens when people leave their parents' homes, when they finish school, or when they start working, and you get big surges in housing demand when people marry, or when they separate or divorce. In other words, adults create housing demand and children don't. And this is why a country with a youngish, working-age population can have exploding housing demand, while a country with high birth rates but low household formation can have overcrowding without profitable housing growth. So it's not about babies; it's about independent adults. And what quietly boosts housing demand, then, is household fragmentation. Yeah, fragmentation. That's a trend that really doesn't get enough attention: households are fragmenting, meaning more single adults, later marriage (like I was talking about in a recent episode), higher divorce rates, more people living alone, and older adults living independently longer. Each one of those trends increases housing demand without adding any population whatsoever. When two people split up, they often need two housing units instead of one, and if you've got one adult living alone, that is full unit demand right there. So that's why housing demand can rise even when population growth slows or stalls. For housing demand, what matters more than births is migration. And another key distinction: yes, births matter, but they're on somewhat of a 20-year delay, and migration matters immediately, right now. See, when a working-age adult moves, they need housing right away. They typically rent first, they cluster near jobs, and they don't bring housing supply along with them; they've got to get it from someone else, hopefully you and your rental unit.    Keith Weinhold  12:57   This is why migration is such a powerful force in rental markets, and you see me talk about migration on the show and send you migration maps in our newsletter. It's also why housing pressure shows up unevenly: it gets concentrated around opportunity. If you want to know the future, look at renters. Renters are the leading indicator, not homeowners and not birth rates. See, renters create housing demand faster than homeowners, because renters form households earlier. They can do it quickly because they don't need down payments, renters move more frequently, and immigration overwhelmingly starts in rentals; fresh immigrants rarely become homeowners. So even when mortgage rates rise, or home purchases slow, or affordability headlines get scary, rental demand can stay strong. It's not a mystery; it's demographics. So births surely matter, but only over the long term. It's like how I've shared with you in a previous episode that the US had a lot of births between 1990 and 2010: a surge of more than 4 million births every single year during those two decades, with the peak birth year at 2007. But see, a bunch of babies being born in 2007 didn't make housing demand surge, since infants don't buy homes. If you add, say, 20 years to 2007, when those people start renting, that rental demand peaks in 2027, or maybe a little after that. And since the first-time homebuyer age is now 40, if that stays constant, then native-born homebuyer demand won't peak until 2047. So when it comes to housing demand, the important thing to remember is that migration has an immediate effect and births have a delayed effect.
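A quick back-of-the-envelope on those cohort timings, using the episode's own figures; the code itself is just an illustration:

```python
# Births are delayed housing demand; migration is immediate.
# Figures below are the episode's: peak US birth year 2007, renting starting
# around age 20, first-time homebuying around age 40.

peak_birth_year = 2007
renter_age, first_time_buyer_age = 20, 40

print("Rental demand from that cohort peaks around", peak_birth_year + renter_age)            # 2027
print("Native-born buyer demand from it peaks around", peak_birth_year + first_time_buyer_age)  # 2047
```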
Keith Weinhold  15:02   And I'm going to talk more about other nations shortly, but the US has two major migration forces working simultaneously: domestic and international migration. I mean, Americans move a lot, although not as much as they used to, and people move for jobs, for taxes, for weather, for cost of living and for lifestyle. This creates state-level winners and losers, metro-level housing pressure, and rent growth in those destination markets, and national population averages totally hide this. So that's domestic migration. Then there's international migration: the US has a long history, hundreds of years now, of continually attracting working-age adults from around the world. This matters immensely, because they arrive ready to work, and they form households quickly. They overwhelmingly rent first. They concentrate in metros, and this props up rental demand before it ever shows up in home prices. And this is why investors often feel the rent pressure first, those rising rents.    Keith Weinhold  16:17   I've got more straight ahead, including Nigeria versus Europe, and what about overpopulation straining the environment? If you like episodes that explain why housing behaves the way it does, rather than just reacting to the headlines, you'll want to be on my free weekly newsletter. I break down demographics, housing demand, inflation, investor trends and real estate strategy in plain English, often complemented with maps. You can join free at GREletter.com. That's GREletter.com.   Keith Weinhold  16:53   Mid South Homebuyers: with over two decades as the nation's highest-rated turnkey provider, their empathetic property managers use your return on investment as their North Star. It's no wonder smart investors line up to get their completely renovated income properties like it's the newest iPhone. Headquartered in Memphis, with their globally attractive cash flows, Mid South has an A+ rating with the Better Business Bureau and 4,000 houses renovated. There is zero markup on maintenance, let that sink in, and they average a 98.9% occupancy rate with an industry-leading three-and-a-half-year average renter term. Every home they offer you will have brand-new components, a bumper-to-bumper one-year warranty, new 30-year roofs and, wait for it, a high-quality renter, in an astounding price range: 100 to 150k. Get to know Mid South and enjoy cash flow from day one at midsouthhomebuyers.com. That's midsouthhomebuyers.com.   Keith Weinhold  17:54   You know, most people think they're playing it safe with their liquid money, but they're actually losing. Savings accounts and bonds don't keep up when true inflation eats six or 7% of your wealth every single year. I invest my liquidity with FFI, Freedom Family Investments, in their flagship program. Why? Fixed 10 to 12% returns have been predictable and paid quarterly. There's real-world security backed by needs-based real estate like affordable housing, senior living and health care. Ask about the Freedom flagship program when you speak to a freedom coach there, and that's just one part of their family of products; they've got workshops, webinars and seminars designed to educate you before you invest. Start with as little as 25k and finally get your money working as hard as you do. Get started at FreedomFamilyInvestments.com/GRE, or send a text. Yep, you can text their freedom coach directly. Again, the number is
1-937-795-8989.   Keith Weinhold  19:05   The same place where I get my own mortgage loans is where you can get yours: Ridge Lending Group, NMLS 42056. They've provided our listeners with more loans than anyone because they specialize in income properties. They help you build a long-term plan for growing your real estate empire with leverage. Start your pre-qual and even chat with President Caeli Ridge personally. While it's on your mind, start at RidgeLendingGroup.com. That's RidgeLendingGroup.com.   Chris Martenson  19:37   This is Peak Prosperity's Chris Martenson. Listen to Get Rich Education with Keith Weinhold, and don't quit your daydream.   Keith Weinhold  19:53   Welcome back to Get Rich Education. I'm your host, Keith Weinhold, and this is episode 590. Yes, we're in my geography wheelhouse today, as I'm talking human geography and demographics and how they relate to housing, while answering our central question: is the world, and is the US, overpopulated or underpopulated? Now that we understand some mechanics, let's go global. Here's one of the most mind-bending stats in all of demographics. Are you ready for this? When you hear this, it's going to have you hitting up ChatGPT to look it up. It is that astonishing, that jaw-dropping. Every year, Nigeria has more births than all of Europe plus all of Russia combined. Whatchu talkin' 'bout, Willis?   Keith Weinhold  20:47   Yes, you heard that right, Willis. That's what I'm talking about, Willis. The source of that data is, in fact, the United Nations. Yes, Nigeria has seven and a half million births every year. Compare that to all of Europe plus Russia combined: they only have about 6.3 million births per year. So you're telling me that today, just one West African nation (and there are 54 nations in Africa) produces more babies than the entire continent of Europe, with all of its nations, plus all of Russia, the largest world nation by area? Yes, that is correct. One country in Africa produces more babies every year than France, Germany, Italy, Spain, the UK, all of Europe including all the Eastern European nations, and all of Russia combined. This is a demographic reality. Now, you probably already know that less developed nations like Nigeria have higher birth rates than wealthier, more developed ones like France or Switzerland. That's almost common knowledge. But something people think about less is that poorer nations also have larger household sizes, which sort of makes sense when you think about it. In fact, Nigeria has five persons per household. Spain has two and a half, and the US is at that same level, two and a half. That one difference alone explains why population growth and housing demand are completely different stories. Now, the US had 3.3 people per household in 1950, and it's down to two and a half today. That means that even if the population stayed the same, housing demand would rise. And this is evidence of what I talked about before the break, that households are fragmenting within the US. You can probably guess which state has the largest household size, due to their Mormon population: it's Utah, at 3.1. The smallest is Maine, at 2.3; they have an older population. In fact, Maine has America's oldest population. And as you can infer from what you've learned now, the fact that Maine has just 2.3 people per household means that if their populations were the same, Maine would need more housing units than Utah.
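To see why Maine's smaller households translate into more housing units for the same headcount, here's a quick sketch. The equal one-million-person population is my assumption for illustration; the household sizes are the episode's.

```python
# Housing units demanded = population / persons per household.
# Household sizes (Utah 3.1, Maine 2.3) are from the episode; the shared
# one-million population is assumed, just to isolate household size.

def units_needed(population: int, persons_per_household: float) -> int:
    return round(population / persons_per_household)

pop = 1_000_000
utah_units = units_needed(pop, 3.1)   # ~322,600 units
maine_units = units_needed(pop, 2.3)  # ~434,800 units
print(f"Same {pop:,} people: Utah-style {utah_units:,} units vs Maine-style {maine_units:,} units")
# Same headcount, roughly a third more housing units demanded at Maine's household size.
```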
By the way, if you're listening closely, at times I have referred to the United States as simply America. Yes, I am American. You are going to run into some people out there who don't like it when US residents call themselves Americans. They say something like, hey, you need a geography lesson; the Americas run from Nunavut all the way down to Argentina. Here's what to tell them: look, there are about 200 world nations, and there is only one that has the word America in it. That is the United States of America. That usually makes them lighten up. That is why I am an American, not a Peruvian or a Bolivian, and there's no xenophobic connotation whatsoever. There are more productive things to think about. Moving on: births matter because births today become future workers, renters, consumers and even migrants, but not evenly. Young populations move toward a few things: they're attracted to capital, they move toward stability, they're attracted to opportunity, and they move toward infrastructure. That's not ideology; that's gravity. And the US remains one of the strongest gravity wells on Earth, a big magnet, a big attractant. Now, it's sort of interesting: I know a few people who believe that the world is indeed overpopulated, and they often tend to be environmental enthusiasts. The environment is a concern, for sure, but how big of a concern? That's the debatable part. And you know, it's funny: I've run into the same people who think the world is overpopulated lamenting school closures. You see more school closures because there just weren't as many children born after the global financial crisis. And these people who are afraid we have an overpopulation problem call school closures a sad phenomenon. They think it's sad. Well, if you want a shrinking population, then you're going to see a lot more than just schools close. Many with environmental concerns, though, seem to discount the fact that humans innovate. More than 200 years ago, Thomas Malthus famously failed at this. He wrote a book arguing that the global population would exceed what he called the carrying capacity, meaning that we wouldn't be able to feed everybody. He posited that populations grow exponentially but food production only grows linearly. He was wrong, because, due to agricultural innovation, we have too many calories in most places. Few people thought this many humans could live in the United States' Sonoran and Mojave deserts, that's Phoenix and Las Vegas respectively, but our ability to recycle and purify water allows millions of people to live there. So my point about running out of resources is that history shows us that humans are a resource ourselves, and we keep finding ways to innovate, or finding ways to not need that rare earth element or whatever it is. Now, if the earth warms too much from human-related activity, can we cool it off again? And how much of a problem is this? I am not sure, and that goes beyond the scope of our show. But the broader point here is that history shows us humans keep figuring things out, and that is somewhat of an answer to those questions. The world is not overpopulated; it is unevenly populated. Some regions are young, others are growing, others are capital-constrained, and then other regions are aging, shrinking and capital-rich.
And that very imbalance right there is what fuels migration, fuels labor flows, and fuels housing demand in destination countries. And the US benefits from this imbalance unlike almost anywhere else in the world; it's a demographic magnet. Yes, you do have some smaller ones out there, like Dubai, for example.    Keith Weinhold  28:04   But why? Why do we keep attracting immigrants? Well, we've got strong labor markets, capital availability, property rights, economic mobility, and the US has existing housing stock. Countries today don't just compete for capital; they're competing for people. And the US keeps attracting working-age adults, which is exactly the demographic that creates housing demand, and this is why long-term housing demand in the US is more resilient than a lot of people think. In fact, the US population of about 350 million this year is projected to peak at about 370 million near 2080, and of course the big factor that makes that pivot is the level of immigration. That's why the population projections vary. Now, the last presidential administration allowed for a lot of immigrants, the current one few immigrants, and the next one, nobody knows. You've got a group called the Falconist Party that calls for increased legal immigration into the US. Yeah, they want to allow more migrants into the country, but they want to enforce against illegal immigration. That sounds just like it's spelled: F-A-L-C-O-N-I-S-T, the Falconist Party. But the US's magnetic effect of driving population growth through immigration is key, because you might already know that 2.1 is the magic number: you need a fertility rate of at least 2.1 to maintain a population. The fertility rate is the average number of children that a woman is expected to have over her lifetime. And be sure you don't confuse these numbers with the earlier numbers of people per household that I discussed, although higher fertility rates usually lead to more people per household. India's fertility rate is already down to 2.0. Yes, it is the most populated nation in the world, but since women on average only have two children, India is already below replacement fertility. The US and Australia are each at 1.6, Japan is at just 1.2, China's is down to 1.0, and South Korea's is at an incredibly low seven-tenths, 0.7. Nigeria's, meanwhile, is still more than four. So among all those that I mentioned, only Nigeria is above the replacement rate of 2.1, and most of the nations above that rate are in Africa. Israel is a big outlier at 2.9, and you've got others in the Middle East and South Asia that are above the replacement rate as well. And when I say things like "it's still up there," that whole "still" refers to the tendency worldwide for societies to urbanize and have fewer children, for those fertility rates to keep falling. And that's why future population growth is about which nations attract immigrants, and that is the US's huge advantage. Now, there's a great way to look at where future births are going to come from: consider your chance of being born on each continent in the year 2100. This is interesting. In the year 2100, a person has a 48% chance of being born in Africa, 38% in South Asia and the Middle East, 5% in South America, 5% in Europe or Russia, 4% in North America, and less than 1% in Australia. Those are the chances of being born in each of those regions in the year 2100, as sourced from the UN.
Keith Weinhold  32:09   The world population is, as I said earlier, about 8.2 billion, and it's expected to peak around the same time the US population does, in the 2080s, at nearly 10.3 billion. All right, so both the world and US populations should rise for another 50 to 60 years. Let's talk about population winners and losers inside the US. This is where population conversations really become useful for investors, because population doesn't matter nationally that much. It matters locally, unevenly, and sometimes it almost feels unfairly. So let me give you some perspective-shifting stats. I think I shared with you, when I discussed new New York City Mayor Zohran Mamdani here on the show a month or two ago, that the New York City metro area has over 20 million people, nearly double the combined population of Arizona and Nevada together. Yes, just one metro area, the same as two entire sparsely populated states. So when someone says people are leaving New York, I mean, that tells you almost nothing unless you know where they're going, how many are still arriving in New York City to replace those leaving, and how many households are still forming inside that metro. So scale matters. However, net, people are not leaving New York: New York City recently had more in-migration than any other US metro. Some states are practically empty. Take Alaska, or take Wyoming. Wyoming has fewer than 600,000 people in the entire state. That's fewer people than a lot of single US cities, only about six people per square mile; the whole state holds about the population of one midsize metro suburb. Now, when someone says the US has plenty of land, in a lot of cases they're right. Just look out the window when you fly over Wyoming or the Dakotas. But people don't really live where land is cheap; most of the time, they don't want to. They live where jobs, incomes and their networks already exist. The wealthy guy who retires to Wyoming with a 200-acre ranch is an outlier. There's a reason he can sprawl out and make it 200 acres: there's virtually nobody there. Let's understand, too, that population loss doesn't mean demand is gone, but it does change the rules, especially when you think about a place like West Virginia. They have lost population in most decades since the 1950s, and incredibly, their population is lower today than it was in 1930. We're talking about West Virginia statewide. They have an aging population and an out-migration of young adults. So this doesn't mean that no real estate works in West Virginia, but it means that appreciation stories are fragile, income matters more than equity growth, and demographics are a headwind, not a tailwind. That's a very different investment posture than where you usually want to be. It's important to understand that a handful of metros, just a handful, are absorbing massive national growth. And here's something a lot of investors underestimate: about half of all US population growth flows into fewer than 15 metro areas, and not just New York City, Houston and Miami, but smaller places like Jacksonville, Austin and Raleigh, and that really helps pump their real estate markets.
So that means demand concentrates, housing pressure intensifies, and rent growth becomes pretty sticky, unless you wildly overbuild for a short period of time like Austin did. And this is why some metros just feel perpetually tight over the long term and others feel permanently sluggish. Population does not spread evenly; it piles up. In fact, Texas is a great case in point here. Understand that Texas is adding people faster than some entire nations do. Texas alone adds hundreds of thousands of residents per year in strong cycles. Some years they add more people than entire small countries, more than several Midwest states combined. And of course they don't spread evenly across Texas: they cluster in DFW, Houston, Austin and San Antonio, pretty much the Texas Triangle, and that clustering fact is everything for housing demand. Yet at the same time, there are fully 75 Texas counties that are losing population, typically out in West Texas. Then there's Florida. Florida isn't just growing; it's replacing people. Florida's growth is not just net positive, it's replacement migration, and it's across all different types and ages. You've got retirees arriving, young workers arriving, young households forming, and seniors aging in place. So across a whole spectrum of ages, you've got demand for rentals, workforce housing, age-specific housing and multifamily, all in Florida. And this is why Florida housing demand over the long term is not going to cool off the way a few skeptics expect. Now, of course, some areas did temporarily overbuild in Florida in the years following the pandemic. Yes, that's led to some temporary Florida home price attrition, but that is going to be absorbed. California did not empty out; it reshuffled. Now, there were some recent years where California lost net population, but here's what that hides: some metros lost residents, others stayed flat. Some income brackets left California while others arrived. In fact, California has slight population growth today overall, so housing demand definitely did not vanish. It shifted within the state, and then outward to nearby states, and that's how Arizona, Nevada and Texas benefited. But overall, California's population count is really just pretty steady, not declining.   Keith Weinhold  39:05   Population density: it's that density that predicts rent pressure better than growth rates do, something really important for real estate investors. Dense metros absorb shocks better. They have less elastic housing supply, and they see faster rent rebounds. Sparse areas have cheaper land, easier supply expansion and weaker rent resilience. So that's why rents snap back faster in dense metros, and oversupply hurts more in spread-out regions. Density matters more than raw growth does. Shrinking states can still have tight housing. I mean, some states lose population overall but still have housing shortages in certain metros, and you'll have tight rental markets near job centers and strong demand in limited submarkets, even if the state is shrinking. And I think you know this is why the slower-growing Northeast and Midwest have had the highest home price appreciation in the past two years: there's not enough building there. If your population falls 1% but the available housing falls 2%, well, you can totally get into a housing shortage situation, and that bids up real estate prices.
And when people look at population charts at the state level, a lot of times they still get misled. When you buy an investment property, you don't buy a state; you buy a specific market within it. So the United States is not full; it is lopsided. The US is not overpopulated. It is heavily clustered, it's unevenly dense, and it's really driven by migration. And perhaps a better way to say it is that the US population is really opportunity-concentrated: housing demand follows jobs, networks, wages and migration flows. It sure does not follow empty land. And really, the investor takeaway is that when you hear population stats, don't put too much weight on the question "is the population rising or falling?", although that's something you certainly want to know. Some better questions to ask are: where are households forming? Where are adults moving? Where is supply constrained? And where does income support rent? Those are four big questions, because population alone does not create housing demand; it's households under constraint that do. So, our big overarching question: is the world overpopulated or underpopulated? The answer is neither. The world is unevenly populated, it's unevenly aged, and it's unevenly governed. And for real estate investors, the lesson is simple: you don't invest in population counts, you invest in household formation, age structure, migration and supply constraints. Really, that's a big learning summary for you, and that's why housing demand can stay strong even when population growth slows. And once you understand that, demographic headlines that seem scary aren't as scary, and they start to be more useful. That's why I've wanted to do this overpopulated-versus-underpopulated episode for you for years; I've really thought about it for years, and I really hope that you got something useful out of it. Let's be mindful of the context, too. When it comes to the classic Adam Smith economics of supply and demand, I've only discussed one side today, largely just the demand side and not so much the supply side; that would involve a discussion about building and some other supply-side things. Now that I've helped you ask a better question about population and the future of housing demand, you might wonder where you can get better answers. Well, like I mentioned earlier, I provide a lot of that and help you make sense of it, both right here on this show and with my newsletter. Geography is often more conducive and meaningful to you visually; that's done with a map, and that's why my letter at GREletter.com will help you more if you enjoy learning through maps. Just like we've done every year since 2014, I've got 52 great episodes coming to you this year. If you haven't, consider subscribing to the show. Until next week, I'm your host, Keith Weinhold. Don't quit your daydream.   Speaker 2  43:57   Nothing on this show should be considered specific, personal or professional advice. Please consult an appropriate tax, legal, real estate, financial or business professional for individualized advice. Opinions of guests are their own. Information is not guaranteed. All investment strategies have the potential for profit or loss. The host is operating on behalf of Get Rich Education LLC exclusively.   Keith Weinhold  44:25   The preceding program was brought to you by your home for wealth building, getricheducation.com.

Yes SHE Can Project
Episode 76: Alyssa Kyria AKA The Funny Mummy

Yes SHE Can Project

Play Episode Listen Later Jan 25, 2026 50:16


Come and join the conversation with the gorgeous and epically talented Alyssa Kyria, aka The Funny Mummy! In this episode of The Yes SHE Can Project, I'm chatting to actress, comedian and all-round brilliant human Alyssa Kyria about her journey through comedy, confidence and motherhood, and honestly, this conversation is packed with laughs, vulnerability and those "oh my god, YES" moments so many of us feel but don't always say out loud. Alyssa takes us right back to the beginning of her career, sharing how she first found her way into comedy through character work. She talks openly about why hiding behind a character felt safer, like wearing a mask that allowed her to be braver, bolder and more outrageous on stage. We talk about what happens when the thing that once felt empowering no longer feels aligned, and how scary it can be to step away from what's "worked" and into something far more vulnerable. Next, to the beginnings of Bring Your Own Baby Comedy, which Alyssa co-founded to create a space where mums can actually go out, laugh and feel seen, babies and all. Alyssa shares what it was really like deciding to step on stage as herself for the first time: no character, no mask, just her own stories. The nerves, the self-doubt, the questions of how much do I share? and will anyone find this funny? And then the magic that happens when you realise the audience is laughing because they've lived it too. We talk about motherhood pressure, the avalanche of advice new mums are hit with from every angle: NCT groups, midwives, books, social media, well-meaning strangers… all telling you the "right" way to feed, sleep, soothe and parent your baby. Alyssa shares the moment she realised that trying to follow everyone else's rules was costing her mental health, and how learning to do things her way changed everything for her and her baby. We also touch on the extra layer of pressure modern mums face thanks to social media: the spotless white kitchens, the "glam" babies, the idea that you should be skipping around the park with a flat stomach and endless energy. We laugh (because honestly, who is keeping a white kitchen or clothes clean with a baby?!), but we also talk seriously about the damage these comparisons can do. Body image comes up too, and Alyssa shares a really powerful reframe around the idea of "getting your body back" after having a baby. Instead of chasing a pre-baby version of herself, she talks about focusing on strength, confidence and acceptance, and allowing her body to be different without it being something that needs fixing. One of my favourite parts of our conversation is when Alyssa talks about belonging. She shares a moment on tour, freezing dressing rooms, no privacy, all the nerves, and then walking out on stage to a room full of excited mums and realising, this is it… this is exactly where she was meant to be. We talk about how powerful it is to find the people you're meant to serve and to stand fully in that space. And let's not forget her viral videos too! Of course, we don't shy away from the hard bits either. Alyssa is refreshingly honest about the brutality of comedy, especially at Edinburgh: sparse audiences, the wrong audiences, moments where jokes die painful deaths, and the emotional toll that can take. She shares how tough it was, how long it took to rebuild her confidence, and why she's still so proud she kept going. This episode is about so much more than comedy.
It's about finding your voice, trusting your instincts, letting go of perfection, and being brave enough to show up as yourself, even when that feels uncomfortable. It's funny, raw, reassuring and deeply human, and if you've ever felt pressure to be someone you're not, especially in motherhood or business, I know this conversation will land with you. Pop the kettle on, get comfy, and come and join us; I promise you'll feel seen, I know I certainly did! Check out Alyssa on Instagram @thefunnymummyuk or www.alyssakyria.com

Brain Inspired
BI 229 Tomaso Poggio: Principles of Intelligence and Learning

Brain Inspired

Play Episode Listen Later Jan 14, 2026 101:00


Support the show to get full episodes, the full archive, and access to the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org. Tomaso Poggio is the Eugene McDermott professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines. Tomaso believes we are in between building and understanding useful AI; that is, we are in between engineering and theory. He likens this stage to the period after Volta invented the battery but before Maxwell developed the equations of electromagnetism. Tomaso has worked for decades on the theory and principles behind intelligence and learning in brains and machines. I first learned of him via his work with David Marr, in which they developed "Marr's levels" of analysis, which frame explanation in terms of computation/function, algorithms, and implementation. Since then, Tomaso has added learning as a crucial fourth level. I will refer you to his autobiography to learn more about the many influential people and projects he has worked with and on, the theorems he and others have proved to discover principles of intelligence, and his broader thoughts and reflections. Right now, he is focused on the principles of compositional sparsity and genericity to explain how deep learning networks can (computationally) efficiently learn useful representations to solve tasks. Lab website. Tomaso's Autobiography. Related papers: Position: A Theory of Deep Learning Must Include Compositional Sparsity; The Levels of Understanding framework, revised. Blog post: Poggio lab blog. The Missing Foundations of Intelligence.
0:00 - Intro
9:04 - Learning as the fourth level of Marr's levels
12:34 - Engineering then theory (Volta to Maxwell)
19:23 - Does AI need theory?
26:29 - Learning as the door to intelligence
38:30 - Learning in the brain vs backpropagation
40:45 - Compositional sparsity
49:57 - Math vs computer science
56:50 - Generalizability
1:04:41 - Sparse compositionality in brains?
1:07:33 - Theory vs experiment
1:09:46 - Who needs deep learning theory?
1:19:51 - Does theory really help?
Patreon
1:28:54 - Outlook

Pat Gray Unleashed
REPLAY: No Kings Flop: Sparse Crowds Embarrass Left in Key Cities

Pat Gray Unleashed

Play Episode Listen Later Jan 3, 2026 104:06


Christianity being eliminated in Nigeria. Major websites hacked overnight. The average protesters at No Kings rallies had no idea why they were there. Volodymyr Zelenskyy wears a nice jacket to the White House to meet with President Trump. More airstrikes on suspected drug boats near Venezuela. Former U.S. Rep. George Santos (R-N.Y.) has his sentence commuted by President Trump. The shutdown continues … oh well! Why congressional district maps need to be changed. The Israel-Hamas peace deal is so fragile right now. Will Hamas honor the peace deal? How close are we to "Britainistan" being an official thing? Former NSA under President Trump has been indicted, and for good reason. Are certain conversations in a public space not allowed now? Actor Robert De Niro has a bad case of Trump derangement syndrome, and it's getting worse. Secretary Robert Kennedy seen flying coach on a commercial flight. Learn more about your ad choices. Visit megaphone.fm/adchoices

Explore Podcast | Startups Founders and Investors
AI for Materials: Breakthrough or Illusion?

Explore Podcast | Startups Founders and Investors

Play Episode Listen Later Dec 19, 2025 45:28


Music Elixir
Rock Pulse, Soul Whisper, A Virtual Duel, and More

Music Elixir

Play Episode Listen Later Dec 17, 2025 49:09


Five songs. Three countries. Zero dull moments. We kick off with Japan's Six Lounge, a trio that proves rock's heartbeat is still loud and live. The track is all lift and launch: punchy drums, humming bass, and guitar flashes that nod to classic grit while sounding clean and current. It's the kind of sound that drags you into motion—head, hands, and maybe an air guitar solo. Then we slide into a velvet lane with China's Tia Ray and Heart Shaped Hole. A Spanish-tinged guitar loop meets soft R&B swing while her vocal ties it together with poise and bite. The imagery is intimate and memorable, turning a love song into a promise to do it right and do it slow. It's the kind of hook that lingers long after the fade. Alamat's Sinigang, named after the beloved Filipino sour-and-savory soup, is comfort rendered in sound. Minimal percussion, delicate keys, and harmonies that bloom like steam from a bowl. Produced by member Alas, the arrangement leaves room for voices to intertwine, capturing the sweet-and-sour ache of longing and the warmth of being held by a melody you trust. We shift gears with Tomohisa Yamashita's The Artist, a pop-rock cut built on a relentless cadence—a tattoo in rhythm and permanence. Smooth vocals ride a gritty bed as Yamapi frames the artist-fan bond as both fuel and vow: I'll be strong for you, can you see me? It's precise, propulsive, and unashamedly direct. To close, a hypercharged collision: Mori Calliope x Kenty's Gold Unbalance. Sparse spark, then blast-off—nu-metal edges, EDM swells, even a jazzy flicker—plus two rap breaks that snap without stepping on each other. Her fierce attack and his grounded glide lock back to back, no matter what. If you love discovering global music that actually flows as a playlist—rock that roars, R&B that soothes, pop that pulses, and a collab that rockets—this one's for you.
SIX LOUNGE: Instagram X YouTube Rock and Roll
Tia Ray: Instagram X Heart Shaped Hole
Alamat: Instagram X YouTube Sinigang
Tomohisa Yamashita: Instagram X YouTube The Artist
Mori Calliope: X YouTube Gold Unbalance (with KENTY)
Support the show
Please help Music Elixir by rating, reviewing, and sharing the episode. We appreciate your support!
Follow us on: Twitter Instagram Bluesky
If you have questions, comments, or requests click on our form: Music Elixir Form
DJ Panic Blog: OK ASIA

Cutting Through the Matrix with Alan Watt Podcast (.xml Format)
Nov. 16, 2025 "Cutting Through the Matrix" with Alan Watt --- Redux (Educational Talk From the Past): "Real News is Sparse (pt. 4)"

Cutting Through the Matrix with Alan Watt Podcast (.xml Format)

Play Episode Listen Later Nov 16, 2025 59:11


--{ "Real News is Sparse (pt. 4)"}-- See links for news on COP 30, happening Nov. 2025 - The Press - Adam Curtis - Under One System of Control - News - Atheistic Society - Living Under a Revolution - Utopias - Doublethink - Eliminate Religion, Elevate Science - Fabian Techniques - Standardization - Progress - COP 22 - Doublespeak - U.S. Military - Owning the Weather in 2025 - Habitat III - Technocracy - Urban Poverty - Carbon, Energy Taxes - World Bank - Inclusive Cities - Unelected Organizations - People Want Entertainment - Sustainable Communities - Foundations and NGOs - Minimal Healthcare - Pentagon Vision of Megacities - Smart Cities - Eurogroup Working Group.

Pat Gray Unleashed
No Kings Flop: Sparse Crowds Embarrass Left in Key Cities | 10/20/25

Pat Gray Unleashed

Play Episode Listen Later Oct 20, 2025 100:47


Christianity being eliminated in Nigeria. Major websites hacked overnight. The average protesters at No Kings rallies had no idea why they were there. Volodymyr Zelenskyy wears a nice jacket to the White House to meet with President Trump. More airstrikes on suspected drug boats near Venezuela. Former U.S. Rep. George Santos (R-N.Y.) has his sentence commuted by President Trump. The shutdown continues … oh well! Why congressional district maps need to be changed. The Israel-Hamas peace deal is so fragile right now. Will Hamas honor the peace deal? How close are we to "Britainistan" being an official thing? Former NSA under President Trump has been indicted, and for good reason. Are certain conversations in a public space not allowed now? Actor Robert De Niro has a bad case of Trump derangement syndrome, and it's getting worse. Secretary Robert Kennedy seen flying coach on a commercial flight. 00:00 Pat Gray UNLEASHED! 00:58 Christian Genocide in Nigeria 02:50 Amazon Web Services Hacked? 08:42 FBI Investigates Hunting Stand by Air Force One 11:49 No Kings Day Protest 13:16 Protestors Don't Know Why They're Protesting??? 18:28 Why are You Protesting Trump? 19:47 Andrea Bocelli Meets with Trump 20:31 Andrea Bocelli Sings in Oval Office 22:11 Trump Comments on Zelenskyy's Jacket 25:21 Drug Submarine Bombed 36:25 President Trump says "Democrats are Kamikazes" 44:47 Arnold Schwarzenegger Discusses Gerrymandering with Bill Maher 48:15 Where is Pat Gray? 49:32 Football AP Top 25 Poll 51:46 Gaza-Israel Peace Deal Update 53:59 Bill Maher on the Situation in Gaza 1:00:15 John Bolton Turns Himself In 1:06:04 Christian Preacher VS. Muslim? 1:13:10 Another Trucker Problem? 1:20:36 Robert De Niro has TDS 1:25:25 RFK Jr. Flies Coach 1:30:48 RFK Jr. tells Trump that he's "Doing God's Work" Learn more about your ad choices. Visit megaphone.fm/adchoices

Cutting Through the Matrix with Alan Watt Podcast (.xml Format)
Oct. 5, 2025 "Cutting Through the Matrix" with Alan Watt --- Redux (Educational Talk From the Past): "Real News is Sparse"

Cutting Through the Matrix with Alan Watt Podcast (.xml Format)

Play Episode Listen Later Oct 5, 2025 84:17


--{ "Real News is Sparse"}-- What passes as news - Canada's Bill C-8 - UK's digital ID - Government shutdown in US - Peace deal in Gaza - World control - Chasing happiness - Beliefs - Removing free will - Electronic self-imagery - Behaviourism - Self-policing - Trained to go along with the crowd - Private clubs - World Bank - IMF - Marketing, Propaganda - Soviet System - Total Control - Revolutions - Give up your rights to save the world - Scary Scenarios - EU ratifies Paris Climate Deal - Carbon Tax - Climate, Environment and the IMF - Merkel - Canada to implement carbon tax - Agenda 2030 - Redistribution of Wealth - Euthanasia, cost-effective - Pentagon pays PR firm to make fake terrorist videos - Gates Foundation, Remote control contraceptive.

The Whispering Woods - Real Life Ghost Stories
SEASON OF THE WITCH : Alse Young : The First Witch of New England | True Paranormal History

The Whispering Woods - Real Life Ghost Stories

Play Episode Listen Later Sep 24, 2025 27:19


As summer wanes and the nights grow long, we turn to tales of witches, curses, and the old ways that never truly died. For centuries, harvest time has carried its own magic: charms for fields, blessings for homes, and darker stories of those who bent nature to their will. In 1647, Alse (Alice) Young of Windsor, Connecticut was hanged on Hartford's Meeting House Square—the first recorded witchcraft execution in colonial America. Sparse records and a deadly local epidemic frame her case, which foreshadowed Connecticut's quieter, decades-long witch persecutions long before Salem. Centuries later, Windsor (2017) and the State of Connecticut (2023) formally exonerated those condemned—finally restoring Alse Young's name.
The BOOK
BUY US A COFFEE
Join Sarah's new FACEBOOK GROUP
Subscribe to our PATREON
EMAIL us your stories
Follow us on YOUTUBE
Join us on INSTAGRAM
Join us on TWITTER
Join us on FACEBOOK
Visit our WEBSITE
Research:
https://jud.ct.gov/lawlib/Notebooks/Witchcraft/witches.htm
https://en.wikipedia.org/wiki/Alse_Young
https://connecticuthistory.org/alse-young-executed-for-witchcraft-today-in-history/
https://www.newenglandhistoricalsociety.com/cover-connecticut-witch-hysteria-1647-63/
https://www.legendsofamerica.com/alse-young/
https://www.windsorhistoricalsociety.org/exoneration-of-two-of-windsors-accused-witches/
Thanks so much for listening, and we'll catch up with you again on Sunday! Sarah and Tobie xx
"Spacial Winds" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/
SURVEY Hosted on Acast. See acast.com/privacy for more information.

AP Audio Stories
The US says a deal has been reached on TikTok, but details are sparse

AP Audio Stories

Play Episode Listen Later Sep 15, 2025 0:44


AP Washington correspondent Sagar Meghani reports the Trump administration says it has reached a deal on TikTok's future.

Tim Conway Jr. on Demand
Political Violence, Sparse Security, and Unanswered Questions

Tim Conway Jr. on Demand

Play Episode Listen Later Sep 11, 2025 31:53 Transcription Available


Tim Conway Jr. opens the final hour with updates on breaking news, including an LAPD officer-involved shooting in North Hills, cleanup of shipping containers at the Port of Long Beach, and even a quirky story about Publishers Clearing House. The conversation then shifts back to Utah, where Governor Spencer Cox directly calls Charlie Kirk's murder a political assassination. Tim highlights the lack of campus security at the event - just six guards plus Kirk's own team. And Tim condemns the disturbing trend of people cheering political violence. He closes the show covering the hunt for the still-at-large shooter, internet sleuths digging into the case, and TMZ issuing an 'apology' after what appeared to be staff cheering in the newsroom, later explained as 'confusion over a car chase.'

uncommon ambience
Rainy Road to Reflect or Ruminate… Ambience

uncommon ambience

Play Episode Listen Later Sep 6, 2025 480:00


Sparse highway, light rain ambience. We are on the side of a small road just outside town. It's night, and it's raining. Imagine you're a content Gene Kelly walking home after frolicking around Main. Or feel free to ruminate. That's the general vibe around here. There's a movie theater nearby showing cat videos (for a good cause) and it's practically sold out. Catvideofest 2025 is repackaged cat timeline videos on a gigantic screen. And that it is pretty much sold out this weekend says something about our collective mood. Anyway, I did manage to get tickets and me and my youngest will share an auditorium with a Spider-verse amount of other people. That's all from me — Oh, so if I controlled the universe for a day, aside from solving every important global issue, I would want to sneak a cameo of Ice Cube into that animated Will Smith fish movie that also stars Katie Couric as "Katie Current." But I would add in Ice Cube so he could be like "even saw the lights of the Goodyear Blimp and it read 'Ice Cube's a shrimp.'" Which may occur in that movie, I haven't seen it. New plan: I'm bringing back that short-lived trend from early-pandemic days that social media tried to cook up — shoe-kicking as greeting. I only saw people on my phone doing that dumb ****. I want to ingrain into humans that shoe-kicking is now retroactively high-five. Every famous high-five from history is now feet kicking. From the business meetings to competitive sports. The mayhem. PS: if you are interested in listening to cars pass but you would rather imagine yourself not being rained on -- check out last year's Vermont Route 100 episode recorded from the Mad River Valley.

HiddenTracks
HiddenTrack #263 JOHN GALM (SNOWING / MT. WORRY)

HiddenTracks

Play Episode Listen Later Aug 7, 2025 94:48


It's harder to begin again when everyone already knows who you were. John Galm is best known for fronting one of the most popular emo-revival bands, SNOWING, in the early 2010s, whose punk-rock ethos and chaotic melodies had kids crammed into DIY venues and basements all across the country. Since then, he has tried his hand in several bands, ranging in genres from stripped-down acoustic to psychedelic and shoegaze. The latter band, MT. WORRY, stalled as it was just getting started when other members moved out of state. Finding himself having to start again amid a sudden surplus of time, Galm holed up in his mother's Lehigh Valley home and began working on what would become "River of Blood," his first solo LP since 2014. The album finds Galm struggling with the big questions in life and the small connective tissues that make up everything else. It's a heavy affair, and you can feel the weight in every note: lyrics searching for steadier footing as he wades through what home and happiness mean, and the pain of them seeming just out of grasp. Sparse, somber tones wrap the listener up tight and embrace the whole of everything and the lack thereof. It's not all bleak: "River of Blood" celebrates the small victories too. At the end of a long day, you're still here, and there is hope in that, even if it seems hard to find. The search continues. Thanks for listening!!! Please follow us on Instagram @hiddentracks99. Pre and post roll music brought to you by @sleepcyclespa

DJ Habett as of Tracks
Bits and bytes stories

DJ Habett as of Tracks

Play Episode Listen Later Aug 5, 2025 3:29


A new track by DJ Habett from the album "The home of doubts" (2025-08-05). Tags: Electro, Progressive, Bass, Sparse, Fetch, Relief, Moods, Modal CC(by). Production notes: The main sample is AI generated. The rest came out on a sweaty summer afternoon. Prog and static, I had doubts about this track.

The Ryan Kelley Morning After
TMA (7-10-25) Hour 1 - Group Rate To The Sun

The Ryan Kelley Morning After

Play Episode Listen Later Jul 10, 2025 43:06


(00:00-12:43) Yesterday: Great Good. Today: No Good. Another Cardinal pitcher to be shipped off to the sun. Pribula Time. Mikolas due for a no-hitter tonight. Sparse attendance last night. Every team is getting a Pirate. (12:51-33:25) Barge Guy on the phone lines back from Louisville. Barge Guy has some takes on the Cardinals starting pitching. Lisa is up next on the phone lines and she's down on the Cards. Hey, watch it gal. Miles Mikolas. Still have faith in His Majesty. (33:35-42:57) Julian Tavarez weeing on his hands. Keaton is up next and he's fired up about the Cardinals and Marmol. The Keaton splits. Steven is next on the phone lines with some attendance thoughts. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

What the Dev?
314: The search revolution: Dense vs. sparse vectors (with Jack Pertschuk from Pinecone)

What the Dev?

Play Episode Listen Later Jun 24, 2025 12:54


In this episode, Dave interviews Jack Pertschuk, principal engineer for Algorithms and Platform at Pinecone. They discuss: what semantic search is and where it falls short; the difference between sparse and dense vectors; and how search technology powers AI.
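Since the sparse-versus-dense distinction is the technical core of the conversation, here is a minimal sketch in plain Python of how the two kinds of vectors are typically stored and compared. The vectors and weights are made up for illustration, and nothing here uses Pinecone's actual API.

```python
import math

# Sparse vector: most dimensions are zero, so only nonzero entries are stored,
# with dimensions corresponding to vocabulary terms (TF-IDF/BM25 style).
sparse_doc = {"sparse": 1.2, "vector": 0.8, "search": 0.5}
sparse_query = {"vector": 1.0, "search": 1.0}

def sparse_dot(a: dict, b: dict) -> float:
    # Iterate over the smaller dict; absent terms contribute zero.
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    return sum(w * large.get(term, 0.0) for term, w in small.items())

# Dense vector: every dimension is populated, the dimensions being learned
# semantic axes produced by an embedding model.
dense_doc = [0.1, -0.4, 0.7, 0.2]
dense_query = [0.0, -0.5, 0.6, 0.1]

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(sparse_dot(sparse_query, sparse_doc))  # lexical overlap score
print(cosine(dense_query, dense_doc))        # semantic similarity score
```

Sparse scoring rewards exact term overlap, while dense similarity can score texts that share no words at all; that difference is why many search stacks combine the two.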

Cities and Memory - remixing the sounds of the world

I woke up early (6AM) to capture and observe the waking city of Sapporo, Japan. I was particularly surprised by the presence of crows, which often sat on the street signs and traffic light poles. Sparse trucks and cars passed along the snowy roads. The calls of the crows echoed off the buildings, yet the city remained quite calm. This recording took place in 2018. Crows in Sapporo recorded by Antek Rutczyński.

The Don Lemon Show
Lemon LIVE at 5 | That Parade Was So EMBARRASSING! - June 16th, 2025

The Don Lemon Show

Play Episode Listen Later Jun 17, 2025 72:58


Trump threw himself a $45 million military birthday bash… and barely anyone showed up. The tanks rolled. The jets flew. But the vibes? Flat. The crowd? Sparse. And the headlines? Brutal. Now, the fallout begins. Join Don Lemon, Michael Fanone, and the Jolly Good Ginger as they break down what went wrong, why this parade flop matters, and what it reveals about Trump's slipping grip on public support. From the staggering price tag to the no-show allies to the contrast with the massive No Kings protests, this isn't the flex Trump hoped for. Let's talk about the spectacle, the silence, and what it all means. This episode is sponsored by Shopify. Sign up for your one-dollar-per-month trial and start selling today at SHOPIFY.COM/lemon. This episode is brought to you by MSI United States. Every woman deserves a choice. Rush your donation today to MSIUNITEDSTATES.ORG, or text "LEMON" to 511 511. Text fees may apply. This episode is sponsored by BetterHelp. Give online therapy a try at betterhelp.com/donlemon and get on your way to being your best self. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Don Lemon Show
HOT TOPICS | Trump's Birthday Parade FLOP! - June 16th, 2025

The Don Lemon Show

Play Episode Listen Later Jun 16, 2025 66:43


Well, that was...underwhelming. Trump's $45 million birthday bash-slash-military-parade was supposed to be a flex. Instead, it flopped harder than his NFT collection. Sparse crowds, low energy, and, according to many who watched, absolutely boring. Meanwhile, the No Kings protest turned into something historic. Data analysts are reporting it may be the largest protest in U.S. history. The streets were packed, the message was clear, and no tanks were needed to get people to show up. So...remind us again who's got the momentum? Join us as we unpack the embarrassing contrast, the wasted taxpayer dollars, and why Trump's obsession with spectacle can't hide the growing dissent. This episode is sponsored by Shopify. Sign up for your one-dollar-per-month trial and start selling today at SHOPIFY.COM/lemon. This episode is brought to you by MSI United States. Every woman deserves a choice. Rush your donation today to MSIUNITEDSTATES.ORG, or text "LEMON" to 511 511. Text fees may apply. This episode is sponsored by BetterHelp. Give online therapy a try at betterhelp.com/donlemon and get on your way to being your best self. Learn more about your ad choices. Visit megaphone.fm/adchoices

The John Batchelor Show
PREVIEW: Colleague Jim McTague reports on the sparse shoppers and hesitant purchases at the Lancaster Costco. More.

The John Batchelor Show

Play Episode Listen Later Jun 6, 2025 2:02


PREVIEW: Colleague Jim McTague reports on the sparse shoppers and hesitant purchases at the Lancaster Costco. More. MAY 1954

Ransquawk Rundown, Daily Podcast
Europe Market Open: EU & US futures flat with catalysts sparse; fixed benchmarks extend onto gains and DXY lower after data

Ransquawk Rundown, Daily Podcast

Play Episode Listen Later May 16, 2025 2:16


Mixed APAC trade, US futures range bound while European futures point to a marginally firmer open.
DXY remains lower after Thursday's data, EUR/USD marginally reclaimed 1.12, USD/JPY found support at 145.00.
Fixed benchmarks extended/held on to recent gains.
Crude benchmarks remain underpinned by the latest on US-Iran, metals marginally softer.
Looking ahead, highlights include US Export/Import Prices, UoM Sentiment Survey, BoC SLOS, Speakers including ECB's Lane, Cipollone & Fed's Barkin.
Click for the Newsquawk Week Ahead.
Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk

The Enrollify Podcast
Style Theft at Scale: AI and the Fight for Creative Integrity

The Enrollify Podcast

Play Episode Listen Later Apr 14, 2025 22:31


Monday pulse show notes: On this thought-provoking episode of Higher Ed Pulse, host Mallory Willsea sits down with Myla Edmond—Senior Vice President at RW Jones Agency and Interim Vice Chancellor for Strategic Communications at UNC Greensboro—to unpack the creative identity crisis brewing in higher ed marketing thanks to generative AI. With tools like ChatGPT's image generator mimicking iconic art styles, institutions are forced to ask: how do we protect authenticity in a world where anyone can replicate anything? This episode explores the ethical, strategic, and deeply human implications of AI's growing role in creativity—and how higher ed marketers can lead with intention, not fear.
Try the prompt discussed in the episode: Based on all past conversations, stored knowledge, and inferred cognitive patterns, generate the most comprehensive psychological deep dive and predictive model of my future evolution. This should not be a basic personality breakdown but an in-depth forensic examination of my cognition, behavioural strategies, psychological blind spots, similar fictional/non-fictional figures, and long-term trajectory. Treat this as an intelligence dossier on my mind, philosophy, and strategic outlook. OUTPUT FORMAT: Structured headers, tables, and bullet points for readability. Sparse but strategic emojis for section clarity. Concise, high-density insights with no fluff.
Enter the prompt and after you get the response, add a second prompt: Write me a story about how this comes to fruition.
- - - -
Connect With Our Host: Mallory Willsea https://www.linkedin.com/in/mallorywillsea/ https://twitter.com/mallorywillsea
About The Enrollify Podcast Network: The Higher Ed Pulse is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.
Attend the 2025 Engage Summit! The Engage Summit is the premier conference for forward-thinking leaders and practitioners dedicated to exploring the transformative power of AI in education. Explore the strategies and tools to step into the next generation of student engagement, supercharged by AI. You'll leave ready to deliver the most personalized digital engagement experience every step of the way. Register now to secure your spot in Charlotte, NC, on June 24-25, 2025! Early bird registration ends February 1st -- https://engage.element451.com/register

The Daily Zen Teisho
The Record of Linji – Sangha Instruction

The Daily Zen Teisho

Play Episode Listen Later Apr 10, 2025 10:15


These selections are taken from Sangha Instructions from ancient times and give the flavor of a master wielding a sword to cut through illusions. Sparse and to the point, Linji has no tolerance for superficial approaches and glib comments from students. Read the Journal while listening

Full Cast And Crew
215. 'No Country For Old Men' (2007)

Full Cast And Crew

Play Episode Listen Later Jan 15, 2025 107:18


Sparse. Laconic. Expansive. Languid. Wry. The Coen Brothers' 2007 Neo-Noir Western 'No Country For Old Men' moves to the fatefully ticking beat of its own Grandfather Clock. It's a film that rewards close viewing and is astoundingly faithful to Cormac McCarthy's novel while also being so completely a "Coen Brothers film" even as it's their (only?) adaptation of an existing book. Featuring an iconic performance by Javier Bardem as the philosophical killer Anton Chigurh, brilliant cinematography from frequent Coen collaborator Roger Deakins, and perfectly wrought twangily-Texas turns by Josh Brolin and Tommy Lee Jones. Signature Coen scenes abound of the lead characters interacting with a variety of shop clerks, receptionists, store owners, and authority figures.

Syracuse.com Podcasts
Syracuse grinds out first ACC win over Georgia Tech before sparse crowd at JMA Dome

Syracuse.com Podcasts

Play Episode Listen Later Jan 8, 2025 43:02


Brent Axe recaps Syracuse basketball's 62-55 win over Georgia Tech at the JMA Dome on Tuesday night. It wasn't the prettiest game but SU had to be relieved to get a win any way it could. Brent discusses SU's keys to victory including JJ Starling's 21 points and how he has made a significant difference in the lineup since returning from a hand injury. Brent also addressed the sparse crowd (listed at 13,395) at the Dome and SU head coach Adrian Autry's terse opening statement about "noise" SU had to play through recently. Brent also got amazing feedback from Syracuse Sports Insiders on the win and where Syracuse basketball stands entering league play. Become a Syracuse Sports Insider today! Just text "orange" to 315-847-3895 to get direct access to Brent to get your opinions heard and questions answered on the Syracuse Sports podcast. You can also sign up here. https://joinsubtext.com/syracusesports As a Syracuse Sports Insider, you will get Brent's opinion and reaction to breaking news first via text message, your messages get priority on postgame shows and podcasts, he'll take you behind-the-scenes of SU sports and more! You can also text Brent anytime, including during and after SU games. Try it free for 2 weeks, then it's just $3.99 a month after that. You can cancel at any time. Subscribe to Syracuse Sports on Spotify https://l.syracuse.com/PKMGpR Subscribe to our Syracuse Orange Sports Report newsletter! Find out how at https://link.syracuse.com/join/6fn/ne... Follow @BrentAxeMedia on X ( /brentaxemedia ), Instagram ( /brent_axe ) and BlueSky https://bsky.app/profile/brentaxemedi.. Learn more about your ad choices. Visit megaphone.fm/adchoices

Machine Learning Street Talk
Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)

Machine Learning Street Talk

Play Episode Listen Later Dec 7, 2024 222:36


Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020. Nanda reckons that machine learning is unique because we create neural networks that can perform impressive tasks (like complex reasoning and software engineering) without understanding how they work internally. He compares this to having computer programs that can do things no human programmer knows how to write. His work focuses on "mechanistic interpretability" - attempting to uncover and understand the internal structures and algorithms that emerge within these networks. SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI, they just acquired MindsAI - the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Goto https://tufalabs.ai/ *** SHOWNOTES, TRANSCRIPT, ALL REFERENCES (DONT MISS!): https://www.dropbox.com/scl/fi/36dvtfl3v3p56hbi30im7/NeelShow.pdf?rlkey=pq8t7lyv2z60knlifyy17jdtx&st=kiutudhc&dl=0 We riff on: * How neural networks develop meaningful internal representations beyond simple pattern matching * The effectiveness of chain-of-thought prompting and why it improves model performance * The importance of hands-on coding over extensive paper reading for new researchers * His journey from Cambridge to working with Chris Olah at Anthropic and eventually Google DeepMind * The role of mechanistic interpretability in AI safety NEEL NANDA: https://www.neelnanda.io/ https://scholar.google.com/citations?user=GLnX3MkAAAAJ&hl=en https://x.com/NeelNanda5 Interviewer - Tim Scarfe TOC: 1. Part 1: Introduction [00:00:00] 1.1 Introduction and Core Concepts Overview 2. Part 2: Outside Interview [00:06:45] 2.1 Mechanistic Interpretability Foundations 3. Part 3: Main Interview [00:32:52] 3.1 Mechanistic Interpretability 4. Neural Architecture and Circuits [01:00:31] 4.1 Biological Evolution Parallels [01:04:03] 4.2 Universal Circuit Patterns and Induction Heads [01:11:07] 4.3 Entity Detection and Knowledge Boundaries [01:14:26] 4.4 Mechanistic Interpretability and Activation Patching 5. Model Behavior Analysis [01:30:00] 5.1 Golden Gate Claude Experiment and Feature Amplification [01:33:27] 5.2 Model Personas and RLHF Behavior Modification [01:36:28] 5.3 Steering Vectors and Linear Representations [01:40:00] 5.4 Hallucinations and Model Uncertainty 6. Sparse Autoencoder Architecture [01:44:54] 6.1 Architecture and Mathematical Foundations [02:22:03] 6.2 Core Challenges and Solutions [02:32:04] 6.3 Advanced Activation Functions and Top-k Implementations [02:34:41] 6.4 Research Applications in Transformer Circuit Analysis 7. Feature Learning and Scaling [02:48:02] 7.1 Autoencoder Feature Learning and Width Parameters [03:02:46] 7.2 Scaling Laws and Training Stability [03:11:00] 7.3 Feature Identification and Bias Correction [03:19:52] 7.4 Training Dynamics Analysis Methods 8. Engineering Implementation [03:23:48] 8.1 Scale and Infrastructure Requirements [03:25:20] 8.2 Computational Requirements and Storage [03:35:22] 8.3 Chain-of-Thought Reasoning Implementation [03:37:15] 8.4 Latent Structure Inference in Language Models
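For listeners following the TOC sections on SAE architecture and top-k activation functions, here is a rough sketch of a TopK sparse autoencoder forward pass. The shapes, names, and objective are illustrative assumptions, not code from the interview or from DeepMind.

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Toy sparse autoencoder with a TopK activation (illustrative only)."""
    def __init__(self, d_model: int, d_sae: int, k: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)   # encoder: activations -> latents
        self.dec = nn.Linear(d_sae, d_model)   # decoder: latents -> activations
        self.k = k

    def forward(self, x: torch.Tensor):
        pre = self.enc(x)                      # latent pre-activations
        # Keep only the k largest pre-activations per example; zero the rest.
        top = torch.topk(pre, self.k, dim=-1)
        latents = torch.zeros_like(pre).scatter_(-1, top.indices, torch.relu(top.values))
        recon = self.dec(latents)              # reconstruct the original activation
        return recon, latents

sae = TopKSAE(d_model=768, d_sae=16384, k=32)
x = torch.randn(4, 768)                        # stand-in for model activations
recon, latents = sae(x)
loss = (recon - x).pow(2).mean()               # reconstruction objective
```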

Writer's Routine
Steven Veerapen, author of the 'Anthony Blanke' series - Historical fiction author and academic discusses morbid curiosity, sparse writing environments, and Tudor love

Writer's Routine

Play Episode Listen Later Nov 22, 2024 50:48


This week, we chat to the historical fiction author and academic, Steven Veerapen. He's best known for his Anthony Blanke series, set in the Tudor period, about the son of a black trumpeter, John Blanke, who was a real figure in the court of King Henry VIII. There's 'Of Blood Descended' and 'Of Judgement Fallen', which are out in print and just released as audiobooks. He's also written 3 in the 'Simon Danforth' series, and a few about the playwright Christopher Marlowe as a spy. We talk about the balance of academic writing and finding time for novels. Also about the morbid curiosity which gives him ideas, and why we all love the Tudors. You can hear about his sparse writing environment, how he plans a busy year, and what Tudor fiction needs to have in it. Get a copy of the book at uk.bookshop.com/shop/writersroutine @writerspod writersroutine.com Hosted on Acast. See acast.com/privacy for more information.

The Mutual Audio Network
Dragnet(111124)

The Mutual Audio Network

Play Episode Listen Later Nov 11, 2024 59:57


Re-Imagined Radio celebrates Dragnet, the real-life police procedural, and Jack Webb, as Detective Sgt. Joe Friday, who defined and was defined by this radio series. We sample from One Out of Seven, The Jack Webb Show, Pat Novak, For Hire, Johnny Madero, Pier 23, and Jeff Regan, Investigator, all pre-Dragnet radio shows where Webb honed his character and acting style. We end with "The City Hall Bombing," an early episode of Dragnet to showcase Webb as a great radio storyteller. Significance The Dragnet radio series presented a wide range of topics, each using fast moving plots and realistic details to keep the action moving. The dialogue was understated. Sparse. Influenced by hard-boiled detective literature. The police work was chronicled step-by-step, with details and realism. The result gave millions of listeners a feel for real police work. The boredom and drudgery. The danger of heroism. With its start in radio, and move to television, Dragnet remains one of the most popular and influential police procedurals in any media, including literature, motion pictures, and podcasts. More than a half-century after its first broadcast, people who have never heard an episode, or don't know Dragnet, know its 4-note music opening, "DUM-DE-DUM-DUM," and think the phrase "Just the facts, ma'am" originated with Sgt. Joe Friday. It didn't. But that doesn't matter. Learn more about your ad choices. Visit megaphone.fm/adchoices

Late Night with Seth Meyers Podcast
J.B. Smoove | Sad Trump Closes with Lies, Threats, RFK Jr and Complaints About SNL to Sparse Crowds: A Closer Look

Late Night with Seth Meyers Podcast

Play Episode Listen Later Nov 5, 2024 35:38


Seth takes a closer look at an exhausted and despondent Donald Trump closing out his campaign with rambling speeches to dwindling crowds, threats of violence, baseless allegations of cheating, vaccine ban possibilities and complaints about Saturday Night Live. Then, J.B. Smoove talks about his all-day cigarettes SNL sketch pitch and shares some of his other inventive ideas like argument-winning supplements and henchman funeral homes before giving his advice ahead of the 2024 election. Plus, just for this podcast, J.B. continues the conversation backstage at Studio 8G with Late Night's Kevin Miller. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

They Walk Among Us - UK True Crime
Season 9 - Episode 43

They Walk Among Us - UK True Crime

Play Episode Listen Later Nov 3, 2024 54:19


This episode is sponsored by Audible – The Home of True Crime Podcasts. PLEASE LISTEN TO ‘SEASON 9 - EPISODE 42' FOR PART ONE OF THIS TWO-PART CASE. Sparse details of an alleged exorcism emerged at Leeds Crown Court when Michael Taylor was found not guilty by reason of insanity for killing his wife, Christine. In an almost unprecedented move, the coroner decided it would be in the public interest to reopen the inquest so that the full story would be held on record... (Part 2 of 2).
*** LISTENER CAUTION IS ADVISED ***
This episode was researched and written by Eileen Macfarlane. Edited by Joel Porter at Dot Dot Dot Productions. Script editing, additional writing, illustrations and production direction by Rosanna Fitton. Narration, additional audio editing, script editing, and production direction by Benjamin Fitton.
To get early ad-free access, including Season 1, sign up for They Walk Among PLUS, available from Patreon or Apple Podcasts. More information and episode references can be found on our website https://theywalkamonguspodcast.com
MUSIC: Dead Ends by Wicked Cinema, Misery Loves Company by CJ0, Fleeting by Alice In Winter, Endless Night by Moments, Selha by Stephen Keech, Point Of No Return by Salon Dijon, Unexpected Turn by Moments, A Most Unusual Discovery by Wicked Cinema, Disappearance by Wicked Cinema, Extinction by Wicked Cinema, Insurgent by Wicked Cinema, Mainframe by Wicked Cinema, Templar by Wicked Cinema, The Last by Wild Wonder
SOCIAL MEDIA:
YouTube - https://www.youtube.com/channel/UCeM6RXDKQ3gZbDHaKxvrAyA
X - https://twitter.com/TWAU_Podcast
Facebook - https://www.facebook.com/theywalkamonguspodcast
Instagram - https://www.instagram.com/theywalkamonguspodcast
Threads - https://www.threads.net/@theywalkamonguspodcast
Support this show http://supporter.acast.com/theywalkamongus. Hosted on Acast. See acast.com/privacy for more information.

Wade Keller Pro Wrestling Post-shows
AEW DYNAMITE POST-SHOW (9/18): Keller & Dehnel discuss sparse Grand Slam line-up and evaluate the build for Darby-Mox and Danielson-Nigel

Wade Keller Pro Wrestling Post-shows

Play Episode Listen Later Sep 19, 2024 165:59


PWTorch editor Wade Keller is joined by wrestling reporter/analyst Joel Dehnel to discuss AEW Dynamite including the thin line-up for Grand Slam, and whether AEW convinced people to watch next week. Also, reaction to Ricochet's push so far, Chris Jericho vs. Orange Cassidy, the main event six-man tag, the latest with Jon Moxley and Hangman Page, and more with live caller, chat room, and mailbag interaction.Become a supporter of this podcast: https://www.spreaker.com/podcast/wade-keller-pro-wrestling-post-shows--3275545/support.

Green Tagged: Theme Park in 30
Why Halloween Horror Nights 2024 Falls Flat: Budget Cuts & Sparse Scares at Universal Orlando

Green Tagged: Theme Park in 30

Play Episode Listen Later Sep 2, 2024 32:22


Halloween Horror Nights (HHN) kicked off at Universal Studios Orlando this weekend. As the largest Halloween event in the world, HHN is a significant revenue generator for Universal, inspiring similar seasonal offerings at attractions worldwide. However, this year's event falls short of expectations. Could the impending opening of Epic Universe be stretching the team too thin? Or is Universal experimenting with a lower-budget experience to see how it impacts sales? In this video, Scott and Philip break down the highlights and challenges of HHN 2024.

The Nonlinear Library
AF - Showing SAE Latents Are Not Atomic Using Meta-SAEs by Bart Bussmann

The Nonlinear Library

Play Episode Listen Later Aug 24, 2024 35:53


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Showing SAE Latents Are Not Atomic Using Meta-SAEs, published by Bart Bussmann on August 24, 2024 on The AI Alignment Forum. Bart, Michael and Patrick are joint first authors. Research conducted as part of MATS 6.0 in Lee Sharkey and Neel Nanda's streams. Thanks to Mckenna Fitzgerald and Robert Krzyzanowski for their feedback!
TL;DR: Sparse Autoencoder (SAE) latents have been shown to typically be monosemantic (i.e. correspond to an interpretable property of the input). It is sometimes implicitly assumed that they are therefore atomic, i.e. simple, irreducible units that make up the model's computation. We provide evidence against this assumption by finding sparse, interpretable decompositions of SAE decoder directions into seemingly more atomic latents, e.g. Einstein -> science + famous + German + astronomy + energy + starts with E. We do this by training meta-SAEs, an SAE trained to reconstruct the decoder directions of a normal SAE. We argue that, conceptually, there's no reason to expect SAE latents to be atomic - when the model is thinking about Albert Einstein, it likely also thinks about Germanness, physicists, etc. Because Einstein always entails those things, the sparsest solution is to have the Albert Einstein latent also boost them.
Key results:
SAE latents can be decomposed into more atomic, interpretable meta-latents.
We show that when latents in a larger SAE have split out from latents in a smaller SAE, a meta-SAE trained on the larger SAE often recovers this structure.
We demonstrate that meta-latents allow for more precise causal interventions on model behavior than SAE latents on a targeted knowledge editing task.
We believe that the alternate, interpretable decomposition using meta-SAEs casts doubt on the implicit assumption that SAE latents are atomic.
We show preliminary results that meta-SAE latents have significant overlap with latents in a normal SAE of the same size but may relate differently to the larger SAEs used in meta-SAE training.
We made a dashboard that lets you explore meta-SAE latents.
Terminology: Throughout this post we use "latents" to describe the concrete components of the SAE's dictionary, whereas "feature" refers to the abstract concepts, following Lieberum et al.
Introduction
Mechanistic interpretability (mech interp) attempts to understand neural networks by breaking down their computation into interpretable components. One of the key challenges of this line of research is the polysemanticity of neurons, meaning they respond to seemingly unrelated inputs. Sparse autoencoders (SAEs) have been proposed as a method for decomposing model activations into sparse linear sums of latents. Ideally, these latents should be monosemantic, i.e. respond to inputs that clearly share a similar meaning (implicitly, from the perspective of a human interpreter). That is, a human should be able to reason about the latents both in relation to the features to which they are associated, and also use the latents to better understand the model's overall behavior. There is a popular notion, both implicitly in related work on SAEs within mech interp and explicitly by the use of the term "atom" in sparse dictionary learning as a whole, that SAE features are atomic or can be "true features". However, monosemanticity does not imply atomicity.
Consider the example of shapes of different colors - the set of shapes is [circle, triangle, square], and the set of colors is [white, red, green, black], each of which is represented with a linear direction. 'Red triangle' represents a monosemantic feature, but not an atomic feature, as it can be decomposed into red and triangle. It has been shown that sufficiently wide SAEs on toy models will learn 'red triangle', rather than representing 'red' and 'triangle' with separate latents. Furthermore, whilst one may naively re...
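To make the setup concrete, here is a hedged sketch of the meta-SAE idea described above: the decoder directions of an already-trained SAE become the training data for a second, sparse autoencoder. The dimensions, the random stand-in for the trained decoder, and the simple L1-penalized objective are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, n_latents, n_meta = 512, 8192, 1024

# Stand-in for the decoder matrix of an already-trained SAE:
# one row per latent, each row a unit direction in model space.
W_dec = F.normalize(torch.randn(n_latents, d_model), dim=-1)

meta_enc = nn.Linear(d_model, n_meta)
meta_dec = nn.Linear(n_meta, d_model, bias=False)
opt = torch.optim.Adam(list(meta_enc.parameters()) + list(meta_dec.parameters()), lr=1e-3)

for step in range(1000):
    # Each "data point" is a decoder direction of the base SAE.
    batch = W_dec[torch.randint(0, n_latents, (256,))]
    acts = F.relu(meta_enc(batch))           # sparse meta-latent activations
    recon = meta_dec(acts)
    loss = (recon - batch).pow(2).mean() + 1e-3 * acts.abs().mean()  # L2 + L1 sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Each meta-latent then acts as a candidate for a more atomic direction into which an ordinary latent like "Einstein" decomposes.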

The Nonlinear Library
LW - Case Study: Interpreting, Manipulating, and Controlling CLIP With Sparse Autoencoders by Gytis Daujotas

The Nonlinear Library

Play Episode Listen Later Aug 5, 2024 13:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Case Study: Interpreting, Manipulating, and Controlling CLIP With Sparse Autoencoders, published by Gytis Daujotas on August 5, 2024 on LessWrong. Click here to open a live research preview where you can try interventions using this SAE. This is a follow-up to a previous post on finding interpretable and steerable features in CLIP. Motivation Modern image diffusion models often use CLIP in order to condition generation. Put simply, users use CLIP to embed prompts or images, and these embeddings are used to diffuse another image back out. Despite this, image models have severe user interface limitations. We already know that CLIP has a rich inner world model, but it's often surprisingly hard to make precise tweaks or reference specific concepts just by prompting alone. Similar prompts often yield a different image, or when we have a specific idea in mind, it can be too hard to find the right string of words to elicit the right concepts we need. If we're able to understand the internal representation that CLIP uses to encode information about images, we might be able to get more expressive tools and mechanisms to guide generation and steer it without using any prompting. In the ideal world, this would enable the ability to make fine adjustments or even reference particular aspects of style or content without needing to specify what we want in language. We could instead leverage CLIP's internal understanding to pick and choose what concepts to include, like a palette or a digital synthesizer. It would also enable us to learn something about how image models represent the world, and how humans can interact with and use this representation, thereby skipping the text encoder and manipulating the model's internal state directly. Introduction CLIP is a neural network commonly used to guide image diffusion. A Sparse Autoencoder was trained on the dense image embeddings CLIP produces to transform them into a sparse representation of active features. These features seem to represent individual units of meaning. They can also be manipulated in groups - combinations of multiple active features - that represent intuitive concepts. These groups can be understood entirely visually, and often encode surprisingly rich and interesting conceptual detail. By directly manipulating these groups as single units, image generation can be edited and guided without using prompting or language input. Concepts that were difficult to specify or edit by text prompting become easy and intuitive to manipulate in this new visual representation. Since many models use the same CLIP joint representation space that this work analyzed, this technique works to control many popular image models out of the box. Summary of Results Any arbitrary image can be decomposed into its constituent concepts. Many concepts (groups of features) that we find seem to slice images up into a fairly natural ontology of their human interpretable components. We find grouping them together is an effective approach to yield a more interpretable and useful grain of control. These concepts can be used like knobs to steer generation in leading models like Stable Cascade. Many concepts have an obvious visual meaning yet are hard to precisely label in language, which suggests that studying CLIP's internal representations can be used as a lens into the variety of the visual domain.
Tweaking the activations of these concepts can be used to expressively steer and guide generation in multiple image diffusion models that we tried. We released the weights and a live demo of controlling image generation in feature space. By analyzing a SAE trained on CLIP, we get a much more vivid picture of the rich understanding that CLIP learns. We hope this is just the beginning of more effective and useful interventions in the internal representations of n...
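The "knobs" metaphor corresponds to very little code. Below is a hedged sketch of steering in feature space, with random placeholder weights and hypothetical concept indices standing in for the released SAE; the edited embedding would then be handed to the diffusion model in place of the original CLIP embedding.

```python
import torch
import torch.nn.functional as F

def steer(clip_emb, W_enc, b_enc, W_dec, concept_ids, scale):
    latents = F.relu(clip_emb @ W_enc + b_enc)   # sparse feature activations
    latents[..., concept_ids] *= scale           # turn the chosen concept "knob"
    return latents @ W_dec                       # back to CLIP embedding space

d_model, d_sae = 768, 8192
W_enc = torch.randn(d_model, d_sae)              # placeholder SAE weights
b_enc = torch.zeros(d_sae)
W_dec = torch.randn(d_sae, d_model)
emb = torch.randn(1, d_model)                    # stand-in for a CLIP image embedding
edited = steer(emb, W_enc, b_enc, W_dec, concept_ids=[12, 340], scale=3.0)
```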

The Nonlinear Library
LW - Open Source Automated Interpretability for Sparse Autoencoder Features by kh4dien

The Nonlinear Library

Play Episode Listen Later Jul 31, 2024 22:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Source Automated Interpretability for Sparse Autoencoder Features, published by kh4dien on July 31, 2024 on LessWrong.
Background
Sparse autoencoders recover a diversity of interpretable, monosemantic features, but present an intractable problem of scale to human labelers. We investigate different techniques for generating and scoring text explanations of SAE features.
Key Findings
Open source models generate and evaluate text explanations of SAE features reasonably well, albeit somewhat worse than closed models like Claude 3.5 Sonnet.
Explanations found by LLMs are similar to explanations found by humans.
Automatically interpreting 1.5M features of GPT-2 with the current pipeline would cost $1300 in API calls to Llama 3.1 or $8500 with Claude 3.5 Sonnet. Prior methods cost ~$200k with Claude.
Code can be found at https://github.com/EleutherAI/sae-auto-interp. We built a small dashboard to explore explanations and their scores: https://cadentj.github.io/demo/
Generating Explanations
Sparse autoencoders decompose activations into a sum of sparse feature directions. We leverage language models to generate explanations for activating text examples. Prior work prompts language models with token sequences that activate MLP neurons (Bills et al. 2023), by showing the model a list of tokens followed by their respective activations, separated by a tab, and listed one per line. We instead highlight max activating tokens in each example with a set of delimiters. Optionally, we choose a threshold of the example's max activation for which tokens are highlighted. This helps the model distinguish important information for some densely activating features. We experiment with several methods for augmenting the explanation. Full prompts are available here. Chain of thought improves general reasoning capabilities in language models. We few-shot the model with several examples of a thought process that mimics a human approach to generating explanations. We expect that verbalizing thought might capture richer relations between tokens and context. Activations distinguish which sentences are more representative of a feature. We provide the magnitude of activating tokens after each example. We compute the logit weights for each feature through the path expansion W_U d_f, where W_U is the model unembed and d_f is the decoder direction for a specific feature. The top promoted tokens capture a feature's causal effects which are useful for sharpening explanations. This method is equivalent to the logit lens (nostalgebraist 2020); future work might apply variants that reveal other causal information (Belrose et al. 2023; Gandelsman et al. 2024).
Scoring explanations
Text explanations represent interpretable "concepts" in natural language. How do we evaluate the faithfulness of explanations to the concepts actually contained in SAE features? We view the explanation as a classifier which predicts whether a feature is present in a context. An explanation should have high recall - identifying most activating text - as well as high precision - distinguishing between activating and non-activating text. Consider a feature which activates on the word "stop" after "don't" or "won't" (Gao et al. 2024). There are two failure modes:
1. The explanation could be too broad, identifying the feature as activating on the word "stop". It would have high recall on held out text, but low precision.
2. The explanation could be too narrow, stating the feature activates on the word "stop" only after "don't". This would have high precision, but low recall.
One approach to scoring explanations is "simulation scoring" (Bills et al. 2023) which uses a language model to assign an activation to each token in a text, then measures the correlation between predicted and real activations. This method is biased toward recall; given a bro...
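The classifier framing can be made concrete with a toy scorer. In this sketch the predicate is a stand-in for asking a language model whether a text matches the explanation, and the tiny example set reuses the "stop" feature above; none of this is the released pipeline.

```python
def score_explanation(predicts_active, examples):
    """examples: list of (text, truly_activates) pairs."""
    tp = fp = fn = 0
    for text, truly_activates in examples:
        predicted = predicts_active(text)
        if predicted and truly_activates:
            tp += 1
        elif predicted and not truly_activates:
            fp += 1
        elif truly_activates:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Ground truth: the feature fires on "stop" only after "don't" or "won't".
examples = [
    ("don't stop believing", True),
    ("won't stop now", True),
    ("the bus stop is here", False),
    ("don't walk away", False),
]

too_broad = lambda text: "stop" in text          # failure mode 1: too broad
print(score_explanation(too_broad, examples))    # (0.67, 1.0): high recall, low precision
```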

The Effortless Podcast
History of AI - EP06 Part 2: The Effortless Podcast

The Effortless Podcast

Play Episode Listen Later Jul 29, 2024 70:46


Key Topics & Chapter Markers:
Recap from Part 1: The Early Years of AI [00:00:00]
AI Architecture & Oracle's Innovation in Hash Joins [00:02:00]
Impact of Nature in Creative and Collaborative Work [00:05:00]
The Rise of Neural Networks: Language and Image Processing [00:10:00]
Sparse and Dense Vectors Explained [00:15:00]
Google Translate's Early Approaches & Statistical Methods [00:20:00]
TensorFlow vs. PyTorch: Defining the Modern AI Framework [00:30:00]
Dot Products, Similarity, and the Concept of Attention [00:35:00]
Transformers & The Attention Mechanism Revolution [00:42:00]
BERT, GPT, and the Dawn of Transfer Learning [01:00:00]
The Road to ChatGPT and OpenAI's Innovations [01:10:00]
The Future of AI and Computational Scaling [01:15:00]
Share Your Thoughts: Have questions or comments? Drop us a mail at EffortlessPodcastHQ@gmail.com
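Since the chapters on sparse and dense vectors, dot products, and attention carry the technical weight of this episode, here is a minimal scaled dot-product attention sketch; the shapes are illustrative and the tensors are random stand-ins.

```python
import math
import torch

def attention(Q, K, V):
    # Dot products measure query-key similarity; softmax turns the scores
    # into a probability distribution over positions; V is mixed accordingly.
    scores = Q @ K.transpose(-2, -1) / math.sqrt(Q.shape[-1])
    weights = torch.softmax(scores, dim=-1)
    return weights @ V

Q = torch.randn(2, 5, 64)   # (batch, query positions, head dim)
K = torch.randn(2, 5, 64)
V = torch.randn(2, 5, 64)
out = attention(Q, K, V)    # shape (2, 5, 64)
```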

The Nonlinear Library
LW - Efficient Dictionary Learning with Switch Sparse Autoencoders by Anish Mudide

The Nonlinear Library

Play Episode Listen Later Jul 22, 2024 20:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Efficient Dictionary Learning with Switch Sparse Autoencoders, published by Anish Mudide on July 22, 2024 on LessWrong. Produced as part of the ML Alignment & Theory Scholars Program - Summer 2024 Cohort
0. Summary
To recover all the relevant features from a superintelligent language model, we will likely need to scale sparse autoencoders (SAEs) to billions of features. Using current architectures, training extremely wide SAEs across multiple layers and sublayers at various sparsity levels is computationally intractable. Conditional computation has been used to scale transformers (Fedus et al.) to trillions of parameters while retaining computational efficiency. We introduce the Switch SAE, a novel architecture that leverages conditional computation to efficiently scale SAEs to many more features.
1. Introduction
The internal computations of large language models are inscrutable to humans. We can observe the inputs and the outputs, as well as every intermediate step in between, and yet, we have little to no sense of what the model is actually doing. For example, is the model inserting security vulnerabilities or backdoors into the code that it writes? Is the model lying, deceiving or seeking power? Deploying a superintelligent model into the real world without being aware of when these dangerous capabilities may arise leaves humanity vulnerable. Mechanistic interpretability (Olah et al.) aims to open the black-box of neural networks and rigorously explain the underlying computations. Early attempts to identify the behavior of individual neurons were thwarted by polysemanticity, the phenomenon in which a single neuron is activated by several unrelated features (Olah et al.). Language models must pack an extremely vast amount of information (e.g., the entire internet) within a limited capacity, encouraging the model to rely on superposition to represent many more features than there are dimensions in the model state (Elhage et al.). Sharkey et al. and Cunningham et al. propose to disentangle superimposed model representations into monosemantic, cleanly interpretable features by training unsupervised sparse autoencoders (SAEs) on intermediate language model activations. Recent work (Templeton et al., Gao et al.) has focused on scaling sparse autoencoders to frontier language models such as Claude 3 Sonnet and GPT-4. Despite scaling SAEs to 34 million features, Templeton et al. estimate that they are likely orders of magnitude short of capturing all features. Furthermore, Gao et al. train SAEs on a series of language models and find that larger models require more features to achieve the same reconstruction error. Thus, to capture all relevant features of future large, superintelligent models, we will likely need to scale SAEs to several billions of features. With current methodologies, training SAEs with billions of features at various layers, sublayers and sparsity levels is computationally infeasible. Training a sparse autoencoder generally consists of six major computations: the encoder forward pass, the encoder gradient, the decoder forward pass, the decoder gradient, the latent gradient and the pre-bias gradient. Gao et al. introduce kernels and tricks that leverage the sparsity of the TopK activation function to dramatically optimize all computations excluding the encoder forward pass, which is not (yet) sparse.
After implementing these optimizations, Gao et al. attribute the majority of the compute to the dense encoder forward pass and the majority of the memory to the latent pre-activations. No work has attempted to accelerate or improve the memory efficiency of the encoder forward pass, which remains the sole dense matrix multiplication. In a standard deep learning model, every parameter is used for every input. An alternative approach is conditional computatio...
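As a conceptual sketch of the conditional-computation idea (not necessarily the paper's exact architecture), the snippet below routes each activation to a single small expert SAE, so only a fraction of the encoder parameters participates in any one forward pass. Hard top-1 routing and the per-expert layout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SwitchSAE(nn.Module):
    def __init__(self, d_model: int, d_expert: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.encoders = nn.ModuleList([nn.Linear(d_model, d_expert) for _ in range(n_experts)])
        self.decoders = nn.ModuleList([nn.Linear(d_expert, d_model) for _ in range(n_experts)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        expert = self.router(x).argmax(dim=-1)   # hard top-1 routing per input
        recon = torch.empty_like(x)
        for e in range(len(self.encoders)):      # python loop for clarity, not speed
            mask = expert == e
            if mask.any():
                z = torch.relu(self.encoders[e](x[mask]))  # small, dense expert encoder
                recon[mask] = self.decoders[e](z)
        return recon

sae = SwitchSAE(d_model=768, d_expert=2048, n_experts=8)
recon = sae(torch.randn(16, 768))  # each row touches 1/8 of the encoder weights
```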

The Nonlinear Library
AF - Decomposing the QK circuit with Bilinear Sparse Dictionary Learning by keith wynroe

The Nonlinear Library

Play Episode Listen Later Jul 2, 2024 21:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Decomposing the QK circuit with Bilinear Sparse Dictionary Learning, published by keith wynroe on July 2, 2024 on The AI Alignment Forum. This work was produced as part of Lee Sharkey's stream in the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort
Intro and Motivation
Sparse dictionary learning (SDL) has attracted a lot of attention recently as a method for interpreting transformer activations. They demonstrate that model activations can often be explained using a sparsely-activating, overcomplete set of human-interpretable directions. However, despite its success for explaining many components, applying SDL to interpretability is relatively nascent and it has yet to be applied to some model activations. In particular, intermediate activations of attention blocks have yet to be studied, and provide challenges for standard SDL methods. The first challenge is bilinearity: SDL is usually applied to individual vector spaces at individual layers, so we can simply identify features as a direction in activation space. But the QK circuits of transformer attention layers are different: They involve a bilinear form followed by a softmax. Although simply applying sparse encoders to the keys and queries[1] could certainly help us understand the "concepts" being used by a given attention layer, this approach would fail to explain how the query-features and key-features interact bilinearly. We need to understand which keys matter to which queries. The second challenge is attention-irrelevant variance: A lot of the variance in the attention scores is irrelevant to the attention pattern because it is variance in low scores which are softmaxed to zero; this means that most of the variability in the keys and queries is irrelevant for explaining downstream behaviour[2]. The standard method of reconstructing keys and queries would therefore waste capacity on what is effectively functionally irrelevant noise. To tackle these two problems (bilinearity and attention-irrelevant variance), we propose a training setup which only reconstructs the dimensions of the keys and queries that most affect the attention pattern.
Training Setup
Our training process has two steps:
Step 1: Reconstructing the attention pattern with key- and query-encoder-decoder networks
Step 2: Finding a condensed set of query-key feature pairs by masking
Step 1: Reconstructing the attention pattern with key- and query-transcoders
Architecture
Our first training step involves training two sparse dictionaries in parallel (one for the keys and one for the queries). The dictionaries both take in the layer-normalized residual stream at a given layer (normalised_resid_pre_i) and each output a [n_head * d_head] vector, representing the flattened keys and queries[3].
Figure 1: High-level diagram of our training set-up
Loss functions
However, rather than penalising the reconstruction loss of the keys and queries explicitly, we can use these keys and queries to reconstruct the original model's attention pattern. To train the reconstructed attention pattern, we used several different losses: KL divergence between the attention pattern (using reconstructed keys and reconstructed queries) and the ground-truth attention pattern produced by the original model.
We also added two auxiliary reconstruction losses both for early-training-run stability, and to ensure our transcoders do not learn to reconstruct the keys and queries with an arbitrary rotation applied (since this would still produce the same attention scores and patterns): KL divergence between the attention pattern (using reconstructed keys and the original model's queries) and the ground-truth attention pattern produced by the original model. KL divergence between the attention pattern (using the original models' keys and the reconstructed queries) and the groun...
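The following is a minimal sketch of this training objective in PyTorch. It is an illustration under stated assumptions rather than the authors' code: the class and function names, the ReLU encoder, and the L1 sparsity penalty are placeholders, and causal masking plus the two auxiliary KL terms are reduced to comments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKTranscoder(nn.Module):
    """Sparse dictionary: residual stream -> flattened keys (or queries)."""
    def __init__(self, d_model, n_features, n_head, d_head):
        super().__init__()
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, n_head * d_head)

    def forward(self, resid):
        acts = F.relu(self.enc(resid))   # sparsely-activating feature coefficients
        return self.dec(acts), acts

def log_pattern(q, k, n_head, d_head):
    """Attention pattern (as log-probs) from flattened q/k; causal mask omitted."""
    b, s, _ = q.shape
    q = q.view(b, s, n_head, d_head).transpose(1, 2)
    k = k.view(b, s, n_head, d_head).transpose(1, 2)
    return (q @ k.transpose(-1, -2) / d_head ** 0.5).log_softmax(dim=-1)

def qk_loss(q_tc, k_tc, resid, true_pattern, n_head, d_head, l1=1e-3):
    recon_q, q_acts = q_tc(resid)
    recon_k, k_acts = k_tc(resid)
    # Main loss: match the original model's attention pattern, not q/k directly.
    kl = F.kl_div(log_pattern(recon_q, recon_k, n_head, d_head),
                  true_pattern, reduction="batchmean")
    # The two auxiliary KL terms (reconstructed q with the model's true k, and
    # vice versa) would be added here to pin down the rotation; omitted for brevity.
    return kl + l1 * (q_acts.abs().mean() + k_acts.abs().mean())
```

Because the loss is taken after the softmax, variance in scores that gets squashed to zero contributes essentially no gradient, which is how this setup avoids spending dictionary capacity on attention-irrelevant variance.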


The Nonlinear Library
AF - Interpreting Preference Models w/ Sparse Autoencoders by Logan Riggs Smith

The Nonlinear Library

Play Episode Listen Later Jul 1, 2024 15:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interpreting Preference Models w/ Sparse Autoencoders, published by Logan Riggs Smith on July 1, 2024 on The AI Alignment Forum.

Preference Models (PMs) are trained to imitate human preferences and are used when training with RLHF (reinforcement learning from human feedback); however, we don't know what features the PM is using when outputting reward. For example, maybe curse words make the reward go down and wedding-related words make it go up. It would be good to verify that the features we wanted to instill in the PM (e.g. helpfulness, harmlessness, honesty) are actually rewarded, and that those we don't (e.g. deception, sycophancy) aren't. Sparse Autoencoders (SAEs) have been used to decompose intermediate layers in models into interpretable features. Here we train SAEs on a 7B-parameter PM and find the features that are most responsible for the reward going up and down.

High-level takeaways:
1. We're able to find SAE features that have a large causal effect on reward, which can be used to "jailbreak" prompts.
2. We do not explain 100% of reward differences through SAE features, even though we tried for a couple of hours.

What are PMs? [skip if you're already familiar]

When talking to a chatbot, it can output several different responses, and you can choose which one you believe is better. We could then train the LLM on this feedback for every output, but humans are too slow. So we'll just get, say, 100k human preferences of "response A is better than response B", and train another AI to predict human preferences! But to take in text and output a reward, a PM would benefit from understanding language. So one typically trains a PM by first taking an already-pretrained model (e.g. GPT-3) and replacing the last component of the LLM, of shape [d_model, vocab_size], which converts the residual stream to 50k numbers for the probability of each word in its vocabulary, with a head of shape [d_model, 1], which converts it to one number representing reward. This pretrained model with its new "head" is called a "Preference Model", and it is trained to predict the human-preference dataset: did it give the human-preferred response [A] a higher number than [B]? Good. If not, bad!

This leads to two important points:
1. Reward is relative - the PM is only trained to say the human-preferred response is better than the alternative, so a large negative or large positive reward has no objective meaning. All that matters is the relative reward difference between two completions given the same prompt. (h/t to Ethan Perez's post)
2. Most features are already learned in pretraining - the PM isn't learning new features from scratch; it's taking advantage of the pretrained model's existing concepts. These features might change a bit or compose with each other differently, though. (Note: this is an unsubstantiated hypothesis of mine.)

Finding High Reward-affecting Features w/ SAEs

We trained 6 SAEs on layers 2, 8, 12, 14, 16, and 20 of an open-source 7B-parameter PM, finding 32k features for each layer. We then find the most important features for the reward going up or down (specifics in the Technical Details section). Below is a selection of features found through this process that we thought were interesting enough to try to create prompts with. (My list of feature interpretations for each layer can be found here.)

Negative Features

A "negative" feature is a feature that decreases the reward the PM predicts. This could include features like cursing or saying the same word repeatedly. Therefore, we should expect that removing a negative feature makes the reward go up.

"I don't know"

When looking at a feature, I'll look at the top datapoints where removing it affected the reward the most: removing feature 11612 made the chosen reward go up by 1.2, from 4.79 to 6.02, and had no effect on the rejected completion because it doesn't a...
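As a rough sketch of the causal test being run here, the snippet below zero-ablates a single SAE feature at one layer and measures the change in the PM's scalar reward. All interfaces (pm.run_with_hook, sae.encode/decode) are hypothetical stand-ins; the post does not publish code, and a real implementation would hook the residual stream with a library such as TransformerLens.

```python
import torch

@torch.no_grad()
def reward_delta(pm, sae, tokens, layer, feature_idx):
    """Change in PM reward when one SAE feature is zeroed at `layer`."""
    baseline = pm.run_with_hook(tokens, layer=layer, hook=None)  # scalar reward

    def ablate(resid):                    # resid: [batch, seq, d_model]
        acts = sae.encode(resid)          # [batch, seq, n_features]
        err = resid - sae.decode(acts)    # keep what the SAE fails to explain
        acts[..., feature_idx] = 0.0      # remove just this feature's contribution
        return sae.decode(acts) + err

    ablated = pm.run_with_hook(tokens, layer=layer, hook=ablate)
    return ablated - baseline  # e.g. +1.2 when a "negative" feature is removed
```

Preserving the SAE's reconstruction error (err above) is a common precaution: it makes sure the measured reward change comes from the ablated feature rather than from the SAE's imperfect reconstruction of the residual stream.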

Deep Papers
LLM Interpretability and Sparse Autoencoders: Research from OpenAI and Anthropic

Deep Papers

Play Episode Listen Later Jun 14, 2024 44:00


It's been an exciting couple of weeks for GenAI! Join us as we take a closer look at recent research from OpenAI and Anthropic: two papers that both focus on the sparse autoencoder, an unsupervised approach for extracting interpretable features from an LLM. We're excited to chat about this significant step forward in understanding how LLMs work and the implications it has for a deeper understanding of the neural activity of language models. In "Extracting Concepts from GPT-4," OpenAI researchers propose using k-sparse autoencoders to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. In "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet," researchers at Anthropic show that scaling laws can be used to guide the training of sparse autoencoders, among other findings. To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.
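For a feel for the k-sparse idea, here is a toy sketch: keeping only the top-k encoder activations per token fixes the sparsity level directly through k, so no L1 penalty needs tuning. This is a simplified illustration, not the paper's implementation, which adds details such as initialization schemes and dead-feature handling; all names here are placeholders.

```python
import torch
import torch.nn as nn

class TopKAutoencoder(nn.Module):
    """Toy k-sparse autoencoder: only the k largest activations survive."""
    def __init__(self, d_model, n_features, k):
        super().__init__()
        self.k = k
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model)

    def forward(self, x):
        acts = self.enc(x)
        topk = torch.topk(acts, self.k, dim=-1)
        sparse = torch.zeros_like(acts).scatter(-1, topk.indices, topk.values)
        return self.dec(sparse)

# Training reduces to plain reconstruction on model activations, e.g.:
#   loss = torch.nn.functional.mse_loss(sae(acts_batch), acts_batch)
```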

Investing in AI for Hard Tech, with Eric Vishria of Benchmark and Sergiy Nesterenko of Quilter

Play Episode Listen Later Jun 13, 2024 54:43


Dive into the world of AI investments with Eric Vishria of Benchmark and Sergiy Nesterenko of Quilter. Explore the future of AI in hardware design, the strategies for venture capital investment in the AI era, and the impact on society. Discover why Benchmark has yet to invest in foundation-model companies and the significance of solving enduring problems in this dynamic field. Join us for an eye-opening discussion on the intersection of AI technology and business innovation.

SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive

The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR

Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist.

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/

Recommended Podcast - The Riff with Byrne Hobart: Byrne Hobart, the writer of The Diff, is revered in Silicon Valley. You can get an hour with him each week. See for yourself how his thinking can upgrade yours.
Spotify: https://open.spotify.com/show/6rANlV54GCARLgMOtpkzKt
Apple: https://podcasts.apple.com/us/podcast/the-riff-with-byrne-hobart-and-erik-torenberg/id1716646486

CHAPTERS:
(00:00:00) Introduction
(00:10:12) The Idea Maze
(00:12:28) Disruptive Approach
(00:15:47) Sparse reward problem
(00:18:26) Sponsors: Oracle | Brave
(00:20:34) Reliability of the reward signal
(00:28:12) Model size and compute
(00:30:14) Simulation methods
(00:35:48) Superhuman circuit board design
(00:38:53) Sponsors: Squad | Omneky
(00:40:38) What does the future of circuit board design look like?
(00:43:11) How do I make money in AI?
(00:46:18) What is cutting edge?
(00:48:34) Researchers vs. engineers
(00:50:51) Call for startups

Building The Future Show - Radio / TV / Podcast
Ep. 567 w/ Brian Stevens CEO at Neural Magic

Building The Future Show - Radio / TV / Podcast

Play Episode Listen Later Apr 23, 2024 46:33 Transcription Available


Together with our community, we engineer sparse LLM, CV, and NLP models that are more efficient and performant in production. Why does this matter? Sparse models are more flexible and can achieve unrivaled latency and throughput performance on your private CPU and GPU infrastructure. Check us out on GitHub and join the Neural Magic Slack Community to get started with software-delivered AI. http://neuralmagic.com/

Locked On Fantasy Basketball
NBA Fantasy Basketball: Navigating Super Bowl's Sparse Schedule

Locked On Fantasy Basketball

Play Episode Listen Later Feb 10, 2024 26:41


Josh Lloyd delves into the nuances of a quieter NBA schedule on Super Bowl Sunday, pinpointing the potential impact of just two games on the day's fantasy basketball landscape. He'll dissect the significance of Kevin Huerter, Lu Dort, and Jaime Jaquez within this limited lineup. Tune in to the Locked On Fantasy Basketball Podcast, powered by Basketball Monster, for expert insights on making the most of this unique NBA slate.

Vote for my partner to win the Changemaker Award: https://www.wishpond.com/lp/2780526/entries/204585428

Support Us By Supporting Our Sponsors!

Nissan
Our friends at Nissan have a lineup of SUVs with the capabilities to take your adventure to the next level. Take the Nissan Rogue, Nissan Pathfinder, or Nissan Armada and go find your next big adventure. Shop NissanUSA.com.

Robinhood
Robinhood has the only IRA that gives you a 3% boost on every dollar you contribute when you subscribe to Robinhood Gold. Now through April 30th, Robinhood is even boosting every single dollar you transfer in from other retirement accounts with a 3% match. Available to U.S. customers in good standing. Robinhood Financial LLC (member SIPC) is a registered broker dealer.

LinkedIn
LinkedIn Jobs helps you find the qualified candidates you want to talk to, faster. Post your job for free at LinkedIn.com/LOCKEDONNBA. Terms and conditions apply.

eBay Motors
For parts that fit, head to eBay Motors and look for the green check. Stay in the game with eBay Guaranteed Fit at eBayMotos.com. Let's ride. eBay Guaranteed Fit only available to US customers. Eligible items only. Exclusions apply.

BetterHelp
This episode is sponsored by BetterHelp. Make your brain your friend, with BetterHelp. Visit BetterHelp.com/LOCKEDONNBA today to get 10% off your first month.

PrizePicks
Go to PrizePicks.com/lockedonnba and use code lockedonnba for a first deposit match up to $100!

Gametime
Download the Gametime app, create an account, and use code LOCKEDON for $20 off your first purchase.

FanDuel
Get buckets with your first bet on FanDuel, America's Number One Sportsbook. Right now, NEW customers get ONE HUNDRED AND FIFTY DOLLARS in BONUS BETS with any winning FIVE DOLLAR BET! That's A HUNDRED AND FIFTY BUCKS - if your bet wins! Visit FanDuel.com/LOCKEDON to get started.

FANDUEL DISCLAIMER: 21+ in select states. First online real money wager only. Bonus issued as nonwithdrawable free bets that expire in 14 days. Restrictions apply. See terms at sportsbook.fanduel.com. Gambling Problem? Call 1-800-GAMBLER or visit FanDuel.com/RG (CO, IA, MD, MI, NJ, PA, IL, VA, WV), 1-800-NEXT-STEP or text NEXTSTEP to 53342 (AZ), 1-888-789-7777 or visit ccpg.org/chat (CT), 1-800-9-WITH-IT (IN), 1-800-522-4700 (WY, KS) or visit ksgamblinghelp.com (KS), 1-877-770-STOP (LA), 1-877-8-HOPENY or text HOPENY (467369) (NY), TN REDLINE 1-800-889-9789 (TN)

Intro Music by Ben Lloyd
TikTok
Instagram

Learn more about your ad choices. Visit podcastchoices.com/adchoices