From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
* 00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
* 00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
* 00:01:31 — Frontier models vs Flash models + role of distillation
* 00:03:52 — History of distillation and its original motivation
* 00:05:09 — Distillation's role in modern model scaling
* 00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
* 00:07:46 — Flash model economics & wide deployment
* 00:08:10 — Latency importance for complex tasks
* 00:09:19 — Saturation of some tasks and future frontier tasks
* 00:11:26 — On benchmarks, public vs internal
* 00:12:53 — Example long-context benchmarks & limitations
* 00:15:01 — Long-context goals: attending to trillions of tokens
* 00:16:26 — Realistic use cases beyond pure language
* 00:18:04 — Multimodal reasoning and non-text modalities
* 00:19:05 — Importance of vision & motion modalities
* 00:20:11 — Video understanding example (extracting structured info)
* 00:20:47 — Search ranking analogy for LLM retrieval
* 00:23:08 — LLM representations vs keyword search
* 00:24:06 — Early Google search evolution & in-memory index
* 00:26:47 — Design principles for scalable systems
* 00:28:55 — Real-time index updates & recrawl strategies
* 00:30:06 — Classic “Latency numbers every programmer should know”
* 00:32:09 — Cost of memory vs compute and energy emphasis
* 00:34:33 — TPUs & hardware trade-offs for serving models
* 00:35:57 — TPU design decisions & co-design with ML
* 00:38:06 — Adapting model architecture to hardware
* 00:39:50 — Alternatives: energy-based models, speculative decoding
* 00:42:21 — Open research directions: complex workflows, RL
* 00:44:56 — Non-verifiable RL domains & model evaluation
* 00:46:13 — Transition away from symbolic systems toward unified LLMs
* 00:47:59 — Unified models vs specialized ones
* 00:50:38 — Knowledge vs reasoning & retrieval + reasoning
* 00:52:24 — Vertical model specialization & modules
* 00:55:21 — Token count considerations for vertical domains
* 00:56:09 — Low resource languages & contextual learning
* 00:59:22 — Origins: Dean's early neural network work
* 01:10:07 — AI for coding & human–model interaction styles
* 01:15:52 — Importance of crisp specification for coding agents
* 01:19:23 — Prediction: personalized models & state retrieval
* 01:22:36 — Token-per-second targets (10k+) and reasoning throughput
* 01:23:20 — Episode conclusion and thanks

Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this really advanced way.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader use cases. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
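As a concrete illustration of the "owning the Pareto frontier" framing above: a model family is on the cost/quality frontier when no other offering is both cheaper and better. A minimal sketch, with entirely made-up cost and quality numbers and hypothetical model names, just to show the dominance check:

```python
# Hypothetical ($ per 1M tokens, benchmark score) points -- illustrative numbers only.
models = {
    "flash-lite": (0.10, 62.0),
    "flash":      (0.35, 71.5),
    "pro":        (2.50, 83.0),
}
competitors = {
    "other-small": (0.40, 65.0),
    "other-large": (3.00, 80.0),
}

def pareto_frontier(points):
    """Return the points not dominated by any other point (cheaper AND at least as good)."""
    frontier = []
    for name, (cost, quality) in points.items():
        dominated = any(
            c <= cost and q >= quality and (c, q) != (cost, quality)
            for c, q in points.values()
        )
        if not dominated:
            frontier.append(name)
    return frontier

all_points = {**models, **competitors}
print(pareto_frontier(all_points))  # -> ['flash-lite', 'flash', 'pro'] with these toy numbers
```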
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey Hinton came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed is that you can get, you know, very close to your largest model performance with distillation approaches.
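A minimal sketch of the distillation recipe Jeff describes here — the larger model's logits, softened by a temperature, act as soft targets blended with the hard labels. The temperature, loss weighting, and toy tensor shapes below are illustrative assumptions, not Gemini's actual training setup:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence to the teacher's softened distribution."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)        # teacher logits -> soft targets
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL term; the T**2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)                 # ordinary hard-label loss
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-way vocabulary.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)            # would come from the frozen larger model
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```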
And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked: the original lineup was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products, for various AI Mode and AI Overviews features.Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like the capability saturating: in certain tasks, the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example where, uh, a benchmark inspired an architectural improvement? Like, uh, I'm just kind of jumping on that because you just...Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something, and most systems don't actually have, you know, much larger than 128K these days. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour long videos in the context and then actually being able to make use of that is useful. The kinds of things you can explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone, you know, a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can form, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful.
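The back-of-the-envelope arithmetic behind the "quadratic" remark above, under the simplifying assumption of naive full self-attention (heads, layers, and constant factors ignored):

```python
# Rough, illustrative arithmetic only: the score matrix for full self-attention
# grows with the square of the context length.
for n_tokens in (1e6, 1e9, 1e12):      # 1M, 1B, 1T tokens of context
    pairwise = n_tokens ** 2            # attention scores per layer, per head
    print(f"{n_tokens:,.0f} tokens -> {pairwise:.1e} pairwise scores")
# 1M  -> 1e12 entries (already heavy)
# 1B  -> 1e18 entries
# 1T  -> 1e24 entries: the "illusion of attending to trillions of tokens"
# has to come from retrieval and system-level tricks, not brute-force attention.
```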
And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio — sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or like robots, or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of, you know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of hints to the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek-OCR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do: interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like turn a video into a SQL-like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like BERT was, like, basically immediately inside of Google search and that improved results a lot, right?
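A toy sketch of the staged-retrieval funnel described above — a cheap filter over a huge corpus, then progressively more expensive scoring over roughly 30,000 and finally around 117 candidates. The scoring functions and cutoffs here are placeholders, not any real Google ranking signal:

```python
# Hypothetical cascade: cheap scoring everywhere, expensive scoring only on the survivors.
def cheap_score(query, doc):      return len(set(query.split()) & set(doc.split()))
def medium_score(query, doc):     return cheap_score(query, doc) + doc.count(query.split()[0])
def expensive_score(query, doc):  return medium_score(query, doc)  # stand-in for a big model

def funnel(query, corpus, k1=30_000, k2=117):
    # Stage 1: lightweight filter over everything (think: trillions of tokens).
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    # Stage 2: a somewhat better model narrows the ~30,000 candidates down further.
    stage2 = sorted(stage1, key=lambda d: medium_score(query, d), reverse=True)[:k2]
    # Stage 3: only ~117 documents ever reach the most capable (most expensive) model.
    return sorted(stage2, key=lambda d: expensive_score(query, d), reverse=True)

docs = ["solar panel deployment report", "cat videos", "renewable energy statistics"]
print(funnel("solar panel deployment", docs)[:3])
```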
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, what would I do? I could pre-compute the image thumbnails. I could, like, try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that, that thing that you moved, many, many times.
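A rough back-of-the-envelope sketch of the energy argument above, using the approximate figures from the conversation (on the order of 1 pJ for a low-precision multiply versus roughly 1,000 pJ to move a weight from on-chip SRAM into the multiplier); it also previews why the batch dimension, discussed next, amortizes the data motion. The parameter count and exact constants are illustrative assumptions:

```python
# Illustrative only; real figures vary by process node, precision, and chip layout.
PJ_PER_MULTIPLY = 1.0        # ~1 pJ for one low-precision multiply
PJ_PER_WEIGHT_MOVE = 1000.0  # ~1000 pJ to move one weight from SRAM to the multiplier

def energy_per_token(n_params, batch_size):
    """Picojoules attributed to ONE token when weight movement is shared across a batch."""
    move = n_params * PJ_PER_WEIGHT_MOVE / batch_size  # data motion amortized over the batch
    compute = n_params * PJ_PER_MULTIPLY               # each token still does its own multiplies
    return move + compute

for batch in (1, 8, 64, 256):
    print(batch, f"{energy_per_token(1e9, batch):.2e} pJ/token")
# batch 1:   ~1.0e12 pJ/token (dominated by moving weights)
# batch 256: ~4.9e9  pJ/token (moves amortized; compute now dominates)
```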
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost, uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it is it worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML and in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out where, what ML computations will people want to run two to six years out in a very fast changing field. And so having people with
Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. 
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Uh, effectively, that would, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part. That you can score or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those. And seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM-8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now, where you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.
Um, uh, just to draw a bit on the IMO gold. Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things. And then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about, like, the merger of like symbolic systems and, and, and LLMs, uh, was a very much core belief. And then somewhere along the line, people just said, Nope, we'll just all do it in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that, that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have like completely separate, uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it's maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think like that IMO with, you know, translating to Lean and using Lean, and then the next year and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good because it shows you that the capabilities of that general model have improved dramatically and, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model, or I want to, you know, do speech recognition, I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed Yi Tay, who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models, the training models. Yeah. Yeah. And it's kind of interesting that like people with these, this like universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh. 
There's this concept of like, uh, maybe capacity of a model, like abstractly a model can only contain the number of bits that it has. And, uh, and so it, you know, God knows like Gemini Pro is like one to 10 trillion parameters. We don't know, but, uh, the Gemma models, for example, right? Like a lot of people want like the open source local models that are like that, and, and, uh, they have some knowledge, which is not necessary, right? Like they can't know everything. Like, like you have the, the luxury of, you have the big model and the big model should be capable of everything. But like when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of like how long are bridges, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use, and use being able to retrieve from my email as a tool and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple, um, you know, uh, stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are like, uh, an interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of, uh, sort of view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Um, or for, say, robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. 
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kind of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming; you know, it'll still be good at Python programming cause we'll include enough of that, but there's other long tail computer languages or coding capabilities that it may suffer on, or multi, uh, multimodal reasoning capabilities may suffer cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper where they, there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, it's like, they're probably not out there that you don't have, you know. I think that's really like the.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data that is not public healthcare data. Um, so I think there are opportunities there to say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low resource language in the context and it just learns. 
Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But I think you put your whole data set in the context, right.Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, uh, or Ethiopian Amharic or something, um, you know, we probably, yeah, are not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
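To make the "put the language in the context" idea concrete, here is a minimal sketch of the pattern: pack whatever grammar notes and parallel sentences exist into the prompt and ask a long-context model to translate, with no fine-tuning. The `generate` interface and the prompt layout are assumptions for illustration; this is not a description of how the Gemini demo was actually built.

```python
from typing import Callable, List, Tuple

def translate_low_resource(
    sentence: str,
    grammar_notes: str,                        # e.g. a digitised grammar sketch of the language
    parallel_examples: List[Tuple[str, str]],  # (source, English) sentence pairs you happen to have
    generate: Callable[[str], str],            # assumed text-in, text-out LLM interface
) -> str:
    """Ask a long-context model to translate by reading the reference material
    in the prompt, relying purely on in-context learning."""
    examples = "\n".join(f"{src} -> {eng}" for src, eng in parallel_examples)
    prompt = (
        "Below are grammar notes and example sentence pairs for a language "
        "with very little written text. Use only this material to translate.\n\n"
        f"GRAMMAR NOTES:\n{grammar_notes}\n\n"
        f"EXAMPLES:\n{examples}\n\n"
        f"Translate into English: {sentence}\nTranslation:"
    )
    return generate(prompt).strip()
```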
On this episode of IMO, we're doing things a little differently! No guest, just your questions. Topics include: what Michelle and Craig learned about each other by doing the podcast together, their thoughts on open relationships, and the last time they cried.Have a question you want answered? Write to us at imopod.com.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Hot honey on Saint Louis style pizza? Yeah… it hits. Imo's teamed up with Mike's Hot Honey for the Honey Trap — cupping pepperoni, that sweet + heat drizzle, and just enough kick to wake you up. It's bold, it's a little dangerous, and it might be the most talked-about collab they've done. We sat down with Imo's Director of Marketing to talk how it happened, why it works, and what it means for the future of Saint Louis pizza. Full episode streaming now. Go listen at www.mostlysuperheroes.com or watch on YouTube. #Imos #MikesHotHoney #SaintLouisStyle #STLFood #PizzaSeason
IPOB, the group campaigning for the secession of Nigeria's South East region, has announced an end to the sit-at-home order it has enforced every Monday for nearly five years. The group's announcement on Sunday that it was ending the forced stay-at-home came after the governor of Anambra state urged residents to defy the order and go about their normal daily activities every Monday. Alhaji Ibrahim Abdulƙadir Ɗan-Ghali, leader of the coalition of northern Nigerian traders' associations based in Imo state, told Abdoulkarim Ibrahim Shikal where they stand on IPOB's announcement. Press the audio icon to listen to the full conversation.
Toxins, chemicals, environmental exposure... How much is too much, how much should we worry, who should be concerned? The goal isn't to be afraid, but to understand how this fits into IBS management - listen to this episode of The Gut Show to learn more about TILT theory without going down a fear-based rabbit hole. Mentioned in this episode: MASTER Method Membership FREE IBS Warrior Summit Take the quiz: What's your poop personality? MCAS episode Thank you to our partners: mBIOTA is the next generation of the elemental diet. Developed with leading gastroenterologists and food scientists, it's the first formula that's both clinically effective and genuinely easy to drink. Pure, easily absorbed nutrients are essential, but the mBIOTA difference is in the details: from their proprietary Amino Taste Modification Technology (ATMT), to their fully vegan and gluten-free ingredients, mBIOTA provides balanced daily nutrition backed by science. The result is a game-changing medical-grade formula that helps restore GI function in patients with SIBO, IMO, IBS, Crohn's, EoE and more. Learn more at mbiota.com and save 20% off their 2 week protocol with the code GUTIVATE. FODZYME is the world's first enzyme supplement specialized to target FODMAPs. When sprinkled on or mixed with high-FODMAP meals, FODZYME's novel patent-pending enzyme blend breaks down fructan, GOS and lactose before they can trigger bloating, gas and other digestive issues. With FODZYME, enjoy garlic, onion, wheat, brussels sprouts, beans, dairy and more — worry free! Discover the power of FODZYME's digestive enzyme blend and eat the foods you love and miss. Visit fodzyme.com and save 20% off your first order with code THEGUTSHOW. One use per customer. ModifyHealth is the leader in evidence-based, medically-tailored meal delivery offering Monash Certified low FODMAP, Gluten free, and Mediterranean meals - expertly crafted to help you achieve better symptom control AND improve overall health. The best part? They make it easy by doing all prep work for you. Simply choose the meals you want, stock your fridge or freezer when meals arrive at your door, then heat and enjoy when you're ready. Delicious meals. Less stress. Complete peace of mind. Check out modifyhealth.com and save 35% off your first order plus free shipping across the US with code: THEGUTSHOW. Connect with Erin Judge, RD: Instagram TikTok Work with Erin FREE symptom tracker
Friday LIVE! returns with giveaways, breaking movie news, and our biggest 2026 film breakdown yet. We recap early screenings of: Pillion (4/4 – cinema perfection?) Scarlet (IMAX anime fantasy) Shelter (Jason Statham action thriller) Mercy (Chris Pratt in dystopian AI thriller) 28 Years Later: The Bone Temple Marty Supreme (Timothée Chalamet sports drama) Anaconda (Paul Rudd & Jack Black reboot) Primate (horror slasher with practical effects) Plus: Marvel Doomsday countdown Entire MCU explained in 30 minutes Dark Knight screening social (Feb 23, Alamo STL) 2 Rivers Comic Con contest Mental health resources (988 hotline + Provident Behavioral Health) 00:00:16 – Friday LIVE Kickoff + Prize Wheel Rules 00:02:08 – Dark Knight One-Night Screening (Feb 23) 00:03:25 – How to Enter Contests + Call/Text Line 00:04:01 – BRADENSTL Food Festivals 2026 00:07:00 – Imo's Pizza + Mike's Hot Honey 00:08:09 – Marvel Doomsday Countdown Begins 00:08:27 – MCU in 30 Minutes Explained 00:09:52 – Pillion Review (4/4) 00:13:05 – Scarlet IMAX Review (3.3/4) 00:16:23 – Shelter (Jason Statham) Review 00:19:52 – Mercy IMAX 3D Review (Minority Report Vibes) 00:22:25 – 28 Years Later: The Bone Temple (3.9/4) 00:25:02 – Marty Supreme (Chalamet Breakout) 00:28:06 – Anaconda Comedy Reboot 00:30:00 – Primate Horror Review 00:33:02 – 2026 Screening Schedule + Avengers Doomsday 00:33:45 – Mental Health Spotlight (988 + Provident) 00:35:23 – Patreon + Ad-Free Support Call or text your movie reviews: 754-CALL-LOG Support the show: mostlysuperheroes.com/support
What happens when addiction, loss, and uncertainty collide with discipline, honesty, and trust. In this episode, I sit down with David Price, a visionary CEO who shares his journey from growing up with addicted parents and battling his own drug addiction to building a multi-million-dollar insurance organization in less than a year. David opens up about hitting bottom, finding clarity through recovery, and learning how mindset, patience, and consistency reshaped his life and business. We explore what it really takes to build trust, lead people well, and stay focused when growth feels uncomfortable. This conversation is about resilience, personal responsibility, and why an Unstoppable mindset is built one honest decision at a time. Highlights:
00:10 – Hear how David Price's early life with addicted parents shaped his resilience and stress tolerance
03:18 – Learn how growing up unstable planted the seed for David's drive to become a business owner
05:01 – Discover the moment David realized addiction was no longer something he could manage alone
15:51 – Hear the unexpected reason David walked into a recovery meeting that changed everything
24:16 – Learn how small, achievable habits helped David rebuild his life after getting clean
37:50 – Understand the hard business lesson David learned after choosing the wrong partner
44:34 – Hear how losing six figures of monthly income overnight forced David to rebuild from zero
53:49 – Learn why David believes trust is more valuable than money when building an unstoppable business
About the Guest: David Price – CEO & Founder, The Price Group IMO David Price is the visionary CEO and Founder of The Price Group IMO, one of the fastest-rising organizations in financial services. His journey to success was anything but ordinary. Growing up in a broken home and battling drug and alcohol addiction for years, David hit rock bottom more than once. In 2013, he made the life-changing decision to get clean and rebuild his life. That moment of clarity became the foundation for everything that followed, teaching him resilience, grit, and an unshakable drive to create a better future. In 2018, David discovered the insurance industry. With no prior experience, he earned his license and built a simple, scalable system that allowed everyday people—single moms, career changers, and those just looking for a side income—to succeed. Within 36 months, he became a millionaire, and by his fourth year he was generating more than $1 million annually. In October 2024, he launched The Price Group IMO, partnering with top carriers and introducing a superior lead program that created even greater opportunities for people to work from home and build real financial freedom. In less than 350 days, the organization produced over $10 million in sales, cementing itself as one of the fastest-growing IMOs in the country. Today, David's mission extends far beyond personal success. He is dedicated to helping people reinvent their lives, showing them how to earn an income, work flexibly from home, and build businesses of their own. Many of the agents and agencies he mentors are already on track to reach six and seven figures, proving the power of his model. Beyond business, David is a member of the Forbes Business Council and an active voice on Instagram, Facebook, LinkedIn, Twitter, and YouTube, where he shares transparent insights, strategies, and motivation for people seeking more freedom, flexibility, and purpose in their careers. Ways to connect with David:
MTRS meets Moa Anbessa: interview with Buri, Prince David and Imo. To share this episode: https://www.radiotandem.it/mountain-top-reggae-station-del-6-febbraio-2026 All Mountain Top Reggae Station podcasts: https://www.radiotandem.it/mountain-top-reggae-station
Today on the Chris and Amy Show; Grace Ybarra, Sports Reporter and Anchor for KMOV joins to talk about the Blues struggles, Cardinals make a big trade, Billikens are hot and more. CBS Chief Washington Correspondent Major Garrett joins to talk about layoffs at the Washington Post and people not trusting media. Dr. F Perry Wilson, Director of the Clinical and Translational Research Accelerator at Yale University joins to talk about dry January, GLP-1s and more. Dutch Gudici, President and CEO of Imo's Pizza joins to talk about the new limited time "Honey Trap" pizza. In the final hour Chris and Amy are joined by Dr. F Perry Wilson, Director of the Clinical and Translational Research Accelerator at Yale University to talk about dry January, GLP-1s and more. Dutch Gudici, President and CEO of Imo's Pizza joins to talk about the new limited time "Honey Trap" pizza. Plus combining friend groups.
In the final hour Chris and Amy are joined by Dr. F Perry Wilson, Director of the Clinical and Translational Research Accelerator at Yale University to talk about dry January, GLP-1s and more. Dutch Gudici, President and CEO of Imo's Pizza joins to talk about the new limited time "Honey Trap" pizza. Plus combining friend groups.
Dutch Guidici, President and CEO of Imo's Pizza, joins the show to discuss the pizza chain's history in St. Louis and to promote the new limited time "Honey Trap" pizza.
The thought came to me to outline what the Game of Life and Reality really looks like, IMO. In this episode, I'm going to outline four important focus points for playing the game of life in an effective manner and using your energy as wisely as possible. Definitely don't take my word for it-- do your own research. Take what resonates and leave the rest for others! If you like what I do, go follow me on all my socials.
Two-time Olympic gymnast Aly Raisman joins IMO today to reflect on her incredible career as a gymnast and life after the sport. Aly shares the moment she knew she wanted to go to the Olympics, how her parents navigated her intense childhood training regimen, and how she is adjusting to life after the sport that profoundly changed her life. Plus, Michelle reveals the sport she always dreamed of joining.Have a question you want answered? Write to us at imopod.com.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
From Astro Boy to Gundam to real-world robots like ASIMO and Pepper, Japan's fascination with robots runs deep. This week, the Krewe is joined by author, cultural commentator, & robot enthusiast Matt Alt to explore how robots became heroes instead of threats in Japanese pop culture and how those sci-fi dreams quietly shaped Japan's modern relationship with technology, AI, and everyday automation. From giant mecha and cyborg icons to robot cafés and beyond, we dig into why Japan seems so comfortable living alongside machines in an episode that's equal parts nostalgia, culture, and future tech.------ About the Krewe ------The Krewe of Japan Podcast is a weekly episodic podcast sponsored by the Japan Society of New Orleans. Check them out every Friday afternoon around noon CST on Apple, Google, Spotify, Amazon, Stitcher, or wherever you get your podcasts. Want to share your experiences with the Krewe? Or perhaps you have ideas for episodes, feedback, comments, or questions? Let the Krewe know by e-mail at kreweofjapanpodcast@gmail.com or on social media (Twitter: @kreweofjapan, Instagram: @kreweofjapanpodcast, Facebook: Krewe of Japan Podcast Page, TikTok: @kreweofjapanpodcast, LinkedIn: Krewe of Japan LinkedIn Page, Blue Sky Social: @kreweofjapan.bsky.social, Threads: @kreweofjapanpodcast & the Krewe of Japan Youtube Channel). Until next time, enjoy!------ Support the Krewe! Offer Links for Affiliates ------Use the referral links below & our promo code from the episode!Support your favorite NFL Team AND podcast! Shop NFLShop to gear up for football season!Zencastr Offer Link - Use my special link to save 30% off your 1st month of any Zencastr paid plan! ------ Matt Alt Links ------Matt's WebsitePure Invention - Publisher's PageMatt's NewsletterPure Tokyoscope PodcastMatt on IG------ Past Matt Alt Episodes ------Akira Toriyama: Legacy of a Legend ft. Matt Alt (S5E3)The History of Nintendo ft. Matt Alt (S4E18)How Marvel Comics Changed Tokusatsu & Japan Forever ft Gene & Ted Pelc (Guest Host, Matt Alt) (S3E13)Yokai: The Hauntings of Japan ft. Hiroko Yoda & Matt Alt (S2E5)Why Japan ft. Matt Alt (S1E1)------ Past KOJ Pop Culture Episodes ------Enjoying Shojo Anime & Manga ft. Taryn of Manga Lela (S5E18)The History & Evolution of Godzilla ft. Dr. William (Bill) Tsutsui (S5E1)Thoughts on Godzilla Minus One ft. Dr. William (Bill) Tsutsui (S4Bonus)Japanese Mascot Mania ft. Chris Carlier of Mondo Mascots (S4E8)Tokusatsu Talk with a Super Sentai ft. Sotaro Yasuda aka GekiChopper (S4E6)The Evolution of PokéMania ft Daniel Dockery [Part 2] (S4E3)The Evolution of PokéMania ft Daniel Dockery [Part 1] (S4E2)Japanese Independent Film Industry ft. Award Winning Director Eiji Uchida (S3E18)Talking Shonen Anime Series ft. Kyle Hebert (S3E10)Japanese Arcades (S2E16)How to Watch Anime: Subbed vs. Dubbed ft. Dan Woren (S2E9)Manga: Literature & An Art Form ft. Danica Davidson (S2E3)The Fantastical World of Studio Ghibli ft. Steve Alpert (S2E1)The Greatest Anime of All Time Pt. 3: Modern Day Anime (2010's-Present) (S1E18)The Greatest Anime of All Time Pt. 2: The Golden Age (1990's-2010's) (S1E16)The Greatest Anime of All Time Pt. 1: Nostalgia (60's-80's) (S1E5)We Love Pokemon: Celebrating 25 Years (S1E3)------ JSNO Upcoming Events ------JSNO Event CalendarJoin JSNO Today!
Red Sea hokey-kokey, the top risk for shipping businesses in 2026, and a booming dark fleet market. These are just some of the stories that are covered in the latest episode of Maritime in Minutes. Seatrade Maritime News' Marcus Hand and Gary Howard reflect on the month of January, with their highlights from the news in maritime and shipping, from the biggest stories to those that simply piqued their interest. Hear more about:
IMO rules understate benefits of utilising captured carbon, says GCMD
Singapore report highlights crew fatigue in Hafnia Nile and Ceres I collision
Shipping sees regulatory changes as biggest risk to business in 2026
Asian shipping execs negative on IMO Net Zero Framework prospects
CMA CGM reverts services to Cape of Good Hope
Two dead, 15 rescued from capsized bulker off Philippines
Repeat orders mark wind propulsion success stories
Dark fleet ship-to-ship transfers off Malaysia more than double
If you enjoyed this episode, please subscribe to ensure you don't miss our latest uploads. For the latest news on the shipping and maritime industries, visit www.seatrade-maritime.com Connect with Marcus Hand: Follow on Twitter: https://twitter.com/marcushand1 Follow on LinkedIn: https://www.linkedin.com/in/marcus-hand-b00a317/ Connect with Gary Howard: Follow on Twitter: https://twitter.com/GaryLeeHoward Follow on LinkedIn: https://www.linkedin.com/in/garyleehoward/ Don't forget to join the conversation and let us know what topics you want us to cover in future on Twitter, Facebook or LinkedIn
It was a pleasure to speak with Rod for the first time in my career. He celebrated his 50th year as a professional drummer this past September. We discuss that and his latest project, Voices of Extreme. Before we got to those topics, Rod shared some great stories of touring with Rush and the legendary Neil Peart. He spoke about what an average day was like on that tour, his personality, and more. I also gave Rod a chance to plug his Wing Thing drum tool. From what I've heard, many touring drummers and drum techs use this tool. Was Winger underrated as far as their musicianship? He answers that question, as well as a little behind-the-scenes into the making of their hit, Seventeen. IMO, their record PULL was their best record. Unfortunately, a thing called grunge hit, and that album never got its due. It was a pleasure to speak with Rod....I hope you enjoy this. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
On this episode of IMO, musician Jon Batiste and writer Suleika Jaouad join the podcast! They tell Michelle and Craig about the surprising place where they first met, their creative processes, and how they have supported one another through illness. Have a question you want answered? Write to us at imopod.com.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
BE WARNED: It's LuAnna, and this podcast contains honest, upfront opinions, rants, bants and general explicit content. But you know you love it.On this week's LuAnna: Anna's back on Celebs Go Dating, Lu's missing her little Crumble and is getting into her dog mum era, Anna can't do Year 4 maths, Imo's defending cryptic crosswords and Anna's trying to make 'binfluencing' a thing.Plus, cuddling your boyfriend's dad's boxers, Steven Bartlett's in hot water, the Brooklyn Beckham debacle, we announce weirdo of the MONTH and Lu's ranting about fridge etiquette.GRAB YOUR TICKETS FOR THE BIG PARTY AT EVERYTHINGLUANNA.COMRemember, if you want to get in touch you can: Email us at luanna@everythingluanna.com OR drop us a WhatsApp on our brand new number 075 215 64640Please review Global's Privacy Policy: https://global.com/legal/privacy-policy/
A 6 week course for people with SIBO (small intestinal bacterial overgrowth) or IMO (intestinal methanogen overgrowth) with a nutritional therapist. SIBO Strategy Sessions starting Feb 16th 2026: https://www.goodnessme-nutrition.com/sibo-strategy-sessions/
✅ Weekly SIBO clinic calls. Recorded to watch back on demand.
✅ Private community to support your stage of SIBO treatment
✅ Q&A space each week in the community and on calls
✅ Supplement guides to support your treatment
✅ Easy to understand lessons about SIBO diets, testing and symptoms
Details
Starts: 16th February to 27th March
Format: Weekly Zoom sessions, private group chat, resources to keep
For: People with SIBO / IMO who are stuck and need help
Cost: £225 (see below for VIP option with 1:1 session)
EARLY BIRD Price - £150 until 31st Jan 2026
From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning—watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi returns to dig into the inside story of the IMO effort and more! We discuss:
* Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
* The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
* Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
* On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory), on-policy is the model generating its own outputs, getting rewarded, and training on its own experience—"humans learn by making mistakes, not by copying"
* Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference (a minimal sketch follows these notes)
* The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
* Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA + FAIR's code world models (modeling internal execution state), (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data)
* Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix—"the model is better than me at this"
* The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up"
* DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
* Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different—you hit the shuttlecock and hear glass shatter, cause and effect are too far apart"
* The closed lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
* Why ideas still matter: "the last five years weren't just blind scaling—transformers, pre-training, RL, self-consistency, all had to play well together to get us here"
* Gemini Singapore: hiring for RL and reasoning researchers, looking for track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier
— Yi Tay
Google DeepMind: https://deepmind.google
X: https://x.com/YiTayML
Chapters
00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey
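Since self-consistency comes up repeatedly in these notes, here is a minimal sketch of its simplest form, majority voting over several sampled answers. The `sample_answer` callable is a stand-in for any stochastic (temperature above zero) model call, and the answer normalisation is an assumption about the task format; LM judges and internal verifiers are more elaborate variants of the same idea.

```python
from collections import Counter
from typing import Callable

def self_consistency_answer(
    question: str,
    sample_answer: Callable[[str], str],  # one stochastic model call, assumed interface
    num_samples: int = 8,
) -> str:
    """Sample several reasoning paths and return the most common final answer."""
    votes = Counter()
    for _ in range(num_samples):
        answer = sample_answer(question).strip().lower()  # naive normalisation, task-dependent
        votes[answer] += 1
    best_answer, _count = votes.most_common(1)[0]
    return best_answer
```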
On this week's IMO, Regina and Reina King join the show to talk about their bond as sisters. The two share what it's like to be Los Angeles born and bred, their early forays into child acting, and how they have been grieving the loss of Regina's son, Ian, who died in early 2022.Have a question you want answered? Write to us at imopod.com.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
On this week's Vogue & Amber: A pod in three parts as Vogue's internet tries its hardest to stop us. House move chaos, Amber's about to take on contemporary ballroom, Irish chocolate cravings and a hobby/resolution chat. Plus, Imo drops a bombshell, a new cutlery nightmare, a friendship dilemma, and some good news. Watch us on Youtube! CLICK HERE! or search Vogue & AmberRemember, if you want to get involved you can:Email us at vogueandamberpod@global.com OR find us on socials @voguewilliams, @ambrerosolero @vogueandamberpodListen and subscribe to Vogue & Amber on Global Player or wherever you get your podcasts.
Before fully diving into 2026, the Krewe takes a minute (or 64) to reflect on Japan in 2025, recapping & remembering the good, the bad & the wacky. From the top news stories of 2025 to the year's biggest pop culture stand outs, this episode covers it all!------ About the Krewe ------The Krewe of Japan Podcast is a weekly episodic podcast sponsored by the Japan Society of New Orleans. Check them out every Friday afternoon around noon CST on Apple, Google, Spotify, Amazon, Stitcher, or wherever you get your podcasts. Want to share your experiences with the Krewe? Or perhaps you have ideas for episodes, feedback, comments, or questions? Let the Krewe know by e-mail at kreweofjapanpodcast@gmail.com or on social media (Twitter: @kreweofjapan, Instagram: @kreweofjapanpodcast, Facebook: Krewe of Japan Podcast Page, TikTok: @kreweofjapanpodcast, LinkedIn: Krewe of Japan LinkedIn Page, Blue Sky Social: @kreweofjapan.bsky.social, Threads: @kreweofjapanpodcast & the Krewe of Japan Youtube Channel). Until next time, enjoy!------ Support the Krewe! Offer Links for Affiliates ------Use the referral links below & our promo code from the episode!Support your favorite NFL Team AND podcast! Shop NFLShop to gear up for football season!Zencastr Offer Link - Use my special link to save 30% off your 1st month of any Zencastr paid plan! Get your very own JAPAN BEAR SHELTER------ Past KOJ Episodes Referenced ------Crash Course in Japanese Politics ft. Tobias Harris of Japan Foresight (S6E13)Social Media & Perceptions of Japan (S6E8)Japanese Soccer on the World Stage ft. Dan Orlowitz (S6E5)Meet the J.League ft. Dan Orlowitz (S6E4)Expo 2025: Japan on the World Stage ft. Sachiko Yoshimura (S6E2)Checking Out Miyagi ft. Ryotaro Sakurai (Guest Host, William Woods) (S5E5)Thoughts on Godzilla Minus One ft. Dr. William (Bill) Tsutsui (S4Bonus)Visiting Themed Cafes in Japan ft. Chris Nilghe of TDR Explorer (S4E15)The Life of a Sumotori ft. 3-Time Grand Champion Konishiki Yasokichi (S4E10)Japan 2021: A Year in Review (S2E13)Japanese Theme Parks ft. TDR Explorer (S2E4)Greatest Anime of All-Time pt. 3: Modern Day Anime (2010-Present) (S1E18)Talking Sumo ft. Andrew Freud (S1E8)------ JSNO Upcoming Events ------JSNO Event CalendarJoin JSNO Today!
What is PCOS, how does it overlap with IBS, and what can you do about it? Join me and our guest Cory Ruth as we break down all of the above and more! Cory Ruth is a registered dietitian nutritionist and women's health expert who specializes in PCOS and nutrition therapy for infertility and assisted reproductive technology. She is the founder and principal of The Women's Dietitian. PCOS Is My Power: The first complete guide to thriving with Polycystic Ovary Syndrome (PCOS), offering a science-backed, holistic path to managing symptoms, plus 68 recipes and 6 meal plans. In this episode, we cover: Meet Cory 3:13 What is PCOS? 4:17 What have you been focused on? 7:01 Why does it take so long to get a diagnosis? 8:59 IBS + PCOS overlap 12:17 Inflammation 14:30 Treating PCOS 20:26 GLP-1s 23:27 How do diet and lifestyle modifications help? 25:15 Biggest myths 30:02 PCOS is my power 32:45 Connect with Cory 36:58 Thank you to our partners: mBIOTA is the next generation of the elemental diet. Developed with leading gastroenterologists and food scientists, it's the first formula that's both clinically effective and genuinely easy to drink. Pure, easily absorbed nutrients are essential, but the mBIOTA difference is in the details: from their proprietary Amino Taste Modification Technology (ATMT), to their fully vegan and gluten-free ingredients, mBIOTA provides balanced daily nutrition backed by science. The result is a game-changing medical-grade formula that helps restore GI function in patients with SIBO, IMO, IBS, Crohn's, EoE and more. Learn more at mbiota.com and save 20% off their 2 week protocol with the code GUTIVATE. FODZYME is the world's first enzyme supplement specialized to target FODMAPs. When sprinkled on or mixed with high-FODMAP meals, FODZYME's novel patent-pending enzyme blend breaks down fructan, GOS and lactose before they can trigger bloating, gas and other digestive issues. With FODZYME, enjoy garlic, onion, wheat, brussels sprouts, beans, dairy and more — worry free! Discover the power of FODZYME's digestive enzyme blend and eat the foods you love and miss. Visit fodzyme.com and save 20% off your first order with code THEGUTSHOW. One use per customer. ModifyHealth is the leader in evidence-based, medically-tailored meal delivery offering Monash Certified low FODMAP, Gluten free, and Mediterranean meals - expertly crafted to help you achieve better symptom control AND improve overall health. The best part? They make it easy by doing all prep work for you. Simply choose the meals you want, stock your fridge or freezer when meals arrive at your door, then heat and enjoy when you're ready. Delicious meals. Less stress. Complete peace of mind. Check out modifyhealth.com and save 35% off your first order plus free shipping across the US with code: THEGUTSHOW. Connect with Erin Judge, RD: Instagram TikTok Work with Erin FREE symptom tracker
Daniel has worked in the insurance and financial services business since 1995. He is the founder of Perpetual Wealth Management, LLC and the Perpetual Wealth System for Premium financing transactions. Daniel's business has been focused on Premium Financing for the last 15 years. Daniel is a National Vendor for Premium Finance Strategies, representing multiple life insurance carriers and finance lenders. Perpetual Wealth Management has funded over $2 Billion of Death Benefit and has over $750 million of funded and/or committed capital loans outstanding, with multiple finance lenders. Daniel works around the country with IMO's, agents and HNW clients to implement these concepts. His main focuses entail Estate & Charitable Planning – Business Planning and Supplemental Income Planning. He has spoken at many industry events on the topic of Premium Financing. Daniel works and lives in Chicago with his wife, Anna Marie, and has three children, Isabelle, Alexandra, and Andrew.If you are interested in learning more about Premium Financing and how these concepts can be implemented in your practice or financial plan, please book a no obligation 30-minute conversation with me.Learn more: http://www.perpetualwm.com/Influential Entrepreneurs with Mike Saundershttps://businessinnovatorsradio.com/influential-entrepreneurs-with-mike-saunders/Source: https://businessinnovatorsradio.com/interview-with-daniel-wachs-with-perpetual-wealth-management-navigating-the-complexities-and-fears-of-premium-finance
Daniel has worked in the insurance and financial services business since 1995. He is the founder of Perpetual Wealth Management, LLC and the Perpetual Wealth System for Premium financing transactions. Daniel's business has been focused on Premium Financing for the last 15 years. Daniel is a National Vendor for Premium Finance Strategies, representing multiple life insurance carriers and finance lenders. Perpetual Wealth Management has funded over $2 Billion of Death Benefit and has over $750 million of funded and/or committed capital loans outstanding, with multiple finance lenders. Daniel works around the country with IMO's, agents and HNW clients to implement these concepts. His main focuses entail Estate & Charitable Planning – Business Planning and Supplemental Income Planning. He has spoken at many industry events on the topic of Premium Financing. Daniel works and lives in Chicago with his wife, Anna Marie, and has three children, Isabelle, Alexandra, and Andrew.If you are interested in learning more about Premium Financing and how these concepts can be implemented in your practice or financial plan, please book a no obligation 30-minute conversation with me.Learn more: http://www.perpetualwm.com/Influential Entrepreneurs with Mike Saundershttps://businessinnovatorsradio.com/influential-entrepreneurs-with-mike-saunders/Source: https://businessinnovatorsradio.com/interview-with-daniel-wachs-with-perpetual-wealth-management-strategic-use-of-leverage-in-retirement-planning
OIP's long-term care planning framework is a simple four-step process: Educate → Discover → Present Solutions → Execute. Scott and Bill emphasize that LTC (or “elder care/aging with dignity”) planning works best when it starts early and stays plan-first, not product-first—otherwise the conversation turns into a transaction. They also stress that “self-insuring” is still a plan, but many clients underestimate the non-financial consequences (family burden, caregiver strain, and messy dynamics) that show up when there's no proactive strategy.Evan explains how Waterlily makes these conversations easier and more personalized: a short client intake generates an individualized care timeline (likelihood, timing, duration, care hours, and zip-code-based costs) and helps clients visualize tradeoffs like family caregiving vs. paid professional care. The platform can also model funding approaches by importing policy PDFs (or illustrations) to simulate how benefits would actually pay during a client's predicted claim scenario, and it supports quoting/application workflows to reduce friction and improve execution. **This is the Optimized Advisor Podcast, where we focus on optimizing the wellbeing and best practices of insurance and financial professionals. Our objective is to help you optimize your life, optimize your profession, and learn from other optimized advisors. If you have questions or would like to be a featured guest, email us at optimizedadvisor@optimizedins.com Optimized Insurance Planning
In this episode, we speak with Vera Alexandropoulou (Vice President at Thalassa Foundation) and Guy Aufenacker (Head Private Banking Maritime at Bergos) about the pressing challenges of the shipping industry in the face of climate goals and energy transition. From the Paris 2050 Agreement and the IMO's Net Zero Framework to the technological gap between land and sea – we explore how shipping must adapt, and why private initiatives like the Thalassa Foundation are becoming increasingly important.DISCLAIMER This publication is for information- and marketing purposes only. The provided information is not legally binding and neither constitutes a financial analysis, nor an offer for investment-transactions or an investment advice and does not substitute any legal, tax or financial advice. Bergos AG does not accept any liability for the accuracy, correctness or completeness of the information. Bergos AG excludes any liability for the realisation of forecasts or other statements contained in the publication. The reproduction in part or in full without prior written permission of Bergos is not permitted.
Offshore units are not conventional ships, and recycling them safely requires a different level of planning, structural scrutiny, and environmental control. In this episode of Beyond the Last Voyage, Jamie Dalzell, Head of GMS Singapore, is joined again by Capt. Yogesh Rehani, Head of Operations at GMS, to discuss how GMS manages the last voyage of FPSOs, FSOs, semi submersibles, and jack up rigs. Capt. Yogesh explains why these assets are more complex to prepare for recycling, including production systems, piping, residues, and the structural risks that come with age, steel wastage, and exposed topside equipment. The conversation covers towage readiness, flare mast protection, stability concerns, and the importance of emergency measures during ocean passages. The episode also revisits landmark operations that helped reshape industry confidence, including the long distance tow of a semi submersible from Brazil and the successful delivery and beaching of a jack up rig in Chittagong, supported by rigorous inspection, daily monitoring, and alignment with marine warranty surveyor requirements. Safety and environmental responsibility remain central throughout. Capt. Yogesh describes how GMS follows MARPOL and IMO expectations for residue control and documentation, and why cutting corners is never an option when lives, reputation, and environmental integrity are at stake.
Daniel has worked in the insurance and financial services business since 1995. He is the founder of Perpetual Wealth Management, LLC, and the Perpetual Wealth System for Premium financing transactions. Daniel's business has been focused on Premium Financing for the last 15 years. Daniel is a National Vendor for Premium Finance Strategies, representing multiple life insurance carriers and finance lenders. Perpetual Wealth Management has funded over $2 Billion of Death Benefit and has over $750 million of funded and/or committed capital loans outstanding, with multiple finance lenders. Daniel works around the country with IMO's, agents, and HNW clients to implement these concepts. His main focuses entail Estate & Charitable Planning, Business Planning, and Supplemental Income Planning. He has spoken at many industry events on the topic of Premium Financing. Daniel works and lives in Chicago with his wife, Anna Marie, and has three children, Isabelle, Alexandra, and Andrew.If you are interested in learning more about Premium Financing and how these concepts can be implemented in your practice or financial plan, please book a no-obligation 30-minute conversation with me.Learn more: http://www.perpetualwm.com/Influential Entrepreneurs with Mike Saundershttps://businessinnovatorsradio.com/influential-entrepreneurs-with-mike-saunders/Source: https://businessinnovatorsradio.com/interview-with-daniel-wachs-with-perpetual-wealth-management-leveraging-premium-finance
https://www.drmarysanders.com/meditations ✨ Download Your FREE Ultimate Meditation Series — three powerful guided meditations designed to calm your nervous system, strengthen intuition, and support deep gut–brain healing. Where Energy Gets Stuck: The Gut, the Nervous System, and the Root of True Healing | Jen Yundt Coles, Health Coach. In this episode, we dive into the profound connection between gut health, nervous system regulation, and the emotional roots of chronic digestive issues. If you're a woman who has "tried everything" — diets, protocols, supplements — and still feels bloated, fatigued, inflamed, or misunderstood, this conversation is for you. I'm joined by Jen Yundt Coles, Functional & Integrative Health Coach and SIBO specialist, whose own decade-long journey through food sensitivities, abdominal pain, brain fog, and hormonal burnout led her to a deeper truth: real healing is impossible without nervous system alignment and energetic sovereignty. Together, we explore: ✨ Why SIBO, IMO, and hydrogen sulfide issues are more than digestion ✨ How chronic stress and trauma shape the gut-brain connection ✨ Why spiritually attuned women are so often dismissed or misdiagnosed ✨ How reconnecting with the enteric nervous system restores clarity, creativity, and personal power. This is where physiology meets intuition — and where your healing truly begins. Connect with Today's Guest: Jen Yundt Coles. Instagram: https://www.instagram.com/the.sibo.coach/ Facebook: https://www.facebook.com/thesibocoach/ LinkedIn: https://www.linkedin.com/in/jenyundtcoles-thesibocoach
This week on IMO, beloved actress and comedy legend Carol Burnett joins Michelle and Craig for a heartfelt conversation about growing up in Hollywood, manifesting her education and career, and the long-lasting impact of The Carol Burnett Show. Plus, the group offers advice to a listener dealing with imposter syndrome. Stay tuned until the very end for a touching moment between Carol, Craig, and Michelle. Have a question you want answered? Write to us at imopod.com. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Looking for diet strategies that improve IBS symptoms and are backed by research, not trends? In this episode, we cover: Can you give me a diet plan? [4:07] Low FODMAP diet [5:13] Long term [8:32] Fiber [12:19] Is all fiber the same? [14:29] How much fiber? [18:02] Is there "bad" food to avoid? [23:17] What are ultra processed foods? [25:44] What should your diet look like? [27:43] Where to start? [34:25] Thank you to our partners: ModifyHealth is the leader in evidence-based, medically-tailored meal delivery offering Monash Certified low FODMAP, Gluten free, and Mediterranean meals - expertly crafted to help you achieve better symptom control AND improve overall health. The best part? They make it easy by doing all prep work for you. Simply choose the meals you want, stock your fridge or freezer when meals arrive at your door, then heat and enjoy when you're ready. Delicious meals. Less stress. Complete peace of mind. Check out modifyhealth.com and save 35% off your first order plus free shipping across the US with code: THEGUTSHOW. mBIOTA is the next generation of the elemental diet. Developed with leading gastroenterologists and food scientists, it's the first formula that's both clinically effective and genuinely easy to drink. Pure, easily absorbed nutrients are essential, but the mBIOTA difference is in the details: from their proprietary Amino Taste Modification Technology (ATMT), to their fully vegan and gluten-free ingredients, mBIOTA provides balanced daily nutrition backed by science. The result is a game-changing medical-grade formula that helps restore GI function in patients with SIBO, IMO, IBS, Crohn's, EoE and more. Learn more at mbiota.com and save 20% off their 2 week protocol with the code GUTIVATE. FODZYME is the world's first enzyme supplement specialized to target FODMAPs. When sprinkled on or mixed with high-FODMAP meals, FODZYME's novel patent-pending enzyme blend breaks down fructan, GOS and lactose before they can trigger bloating, gas and other digestive issues. With FODZYME, enjoy garlic, onion, wheat, brussels sprouts, beans, dairy and more — worry free! Discover the power of FODZYME's digestive enzyme blend and eat the foods you love and miss. Visit fodzyme.com and save 20% off your first order with code THEGUTSHOW. One use per customer.
We’re kicking off a two-part series ranking the 30 largest public companies in Canada based purely on business quality. In Part 2, we tackle the last 15 names (and 2 bonus names) covering everything from banks and pipelines to precious metals—sharing our perspective on which business models stand the test of time and which ones carry more hidden risk. Tickers of stocks discussed: SHOP.TO, FNV.TO, NA.TO, DOL.TO, RY.TO, BN.TO, TRI.TO, WPM.TO, GIB.A.TO, CP.TO, ATD.TO, L.TO, IFC.TO, ENB.TO, BMO.TO, AEM.TO, CNQ.TO, CNR.TO, MFC.TO, TRP.TO, GWO.TO, CLS.TO, FFH.TO, SU.TO, IMO.TO, SLF.TO, ABX.TO, CM.TO, TD.TO, K.TO Watch the full video on Our New Youtube Channel! Check out our portfolio by going to Jointci.com Our Website Canadian Investor Podcast Network Twitter: @cdn_investing Simon’s twitter: @Fiat_Iceberg Braden’s twitter: @BradoCapital Dan’s Twitter: @stocktrades_ca Want to learn more about Real Estate Investing? Check out the Canadian Real Estate Investor Podcast! Apple Podcast - The Canadian Real Estate Investor Spotify - The Canadian Real Estate Investor Web player - The Canadian Real Estate Investor Asset Allocation ETFs | BMO Global Asset Management Sign up for Fiscal.ai for free to get easy access to global stock coverage and powerful AI investing tools. Register for EQ Bank, the seamless digital banking experience with better rates and no nonsense.See omnystudio.com/listener for privacy information.
I'm pretty sure this will be a popular episode. It was such fun, so interesting, so thought-provoking, and Professor Chris is a genius (IMO), with an amazing ability to connect, tell stories and make super-interesting research and science podcast-friendly. Among other things, we spoke about the science of the paranormal (anomalistic psychology), psychic abilities, false memories, ghosts, haunted houses, magicians, mentalists and his book 'The Science of Weird Shit'! So F**king good. Enjoy. **BIO: Chris French is a British Psychologist and Professor Emeritus at Goldsmiths, University of London, where he founded the Anomalistic Psychology Research Unit. He specialises in the psychology of paranormal beliefs and anomalous experiences - why people believe in ghosts, psychics, UFOs, astrology, and other weird and wonderful claims. See omnystudio.com/listener for privacy information.
Back in 2012, a small team and I, along with Coffee Fest Trade Shows, had the pleasure of developing and running a competition called "America's Best Coffee House". It was a first-of-its-kind team cafe competition and, IMO, the most accurate-to-real-cafe-work competition ever made. To this day it remains unmatched in showcasing true-to-life barista skills that working baristas regularly engage in. Three times a year, we shipped a fully functional coffee bar across the country and assembled it on the show floor. (POS, pour-over bar, back bar with sinks, front bar, etc. The works!) Through a stringent application process including video submissions, written apps, and secret shopping, we invited cafe teams made up exclusively of 3 current baristas from that company to run this bar: a 10 min opening shift, a 30 min live bar serving actual customers from the show floor (who also had weighted judging slips on their receipts), and a 10 min closing shift. Along with our own standard cafe menu they were required to make, they brought their own coffee, sig drinks, pour-over kettles, etc. We had 3 judges judging all aspects of the competition, and the teams never knew when the drinks they made would be judged by us, since secret shoppers in the queue were instructed to order a range of drinks at random. It was fantastic, complicated, effective, and expensive to produce (hence why we closed up shop in 2015), but the lessons we learned from the display of teamwork, cleanliness, communication, workflow, QC, hospitality, and more were incredible. In this episode I reminisce about those competitions a bit and talk about the lessons and insights drawn from those intense real-life competitions and how they apply directly to your cafe. Will we ever see this competition rise up once more? Maybe not. But we can all raise our own standards and each pursue being the "Best Coffee House" where we are for the people we serve and serve with. Be sure to click the link below for a video produced during that time to see a slice of what we did. (Shout out to Joshua Boyt, Jesse Harriott, Jessica Rice, Terry Ziniewicz, Ryan Soeder, Pete Licata, Danny Loeschen, Aaron O'neal, and the Coffee Fest director at the time, David Heilbrunn.) We discuss: Integrity Real life vs on stage How time finds you out Prepare yourself for the unexpected You never know who is judging Teamwork Having a plan but being flexible Trusting the standards Links: Video montage of America's Best Coffee House Related Episodes: 492: How to be The Best Coffee Shop 298: A Trophy, or Atrophy? SHIFT BREAK! Every Customer is a Judge KEYS TO THE SHOP ALSO OFFERS 1:1 CONSULTING AND COACHING! If you are a cafe owner and want to work one on one with me to bring your shop to its next level and help bring you joy and freedom in the process, then email chris@keystothshop.com or book a free call now: https://calendly.com/chrisdeferio/30min SPONSOR The world loves plant-based beverages and baristas love the Barista Series! www.pacificfoodservice.com
BE WARNED: It's LuAnna, and this podcast contains honest, upfront opinions, rants, bants and general explicit content. But you know you love it.On this week's LuAnna: We're slap bang into a new year with the same old Luannimo. Lu's in Dubes and has a new puppy, Anna's in la chapelle and grabbed a man's ball bag on the rapids in Centerparcs and Imo's gone all high brow on us and is now into cryptic crosswords. See? New Year, Old Us.Plus, the Kardashians Kontroversial Kristmas, chaos in Subway, advice for a mum with red flags flying around her daughter's boyfriend, a cruise hook up with a twist and an interesting period related would you rather.GRAB YOUR TICKETS FOR THE BIG PARTY AT EVERYTHINGLUANNA.COMRemember, if you want to get in touch you can: Email us at luanna@everythingluanna.com OR drop us a WhatsApp on 07745 266947Please review Global's Privacy Policy: https://global.com/legal/privacy-policy/
The Krewe sits down with Amy Hever, Executive Director of the MLB Players Trust, and Chris Capuano, former MLB pitcher & Chair of the Players Trust Board, to explore how MLB players give back through community-driven initiatives. Discover the mission of the MLB Players Trust, player-led philanthropy, & how baseball continues to bridge cultures between Japan & the United States through youth programs, education initiatives, & meaningful cross-cultural engagement beyond the field.------ About the Krewe ------The Krewe of Japan Podcast is a weekly episodic podcast sponsored by the Japan Society of New Orleans. Check them out every Friday afternoon around noon CST on Apple, Google, Spotify, Amazon, Stitcher, or wherever you get your podcasts. Want to share your experiences with the Krewe? Or perhaps you have ideas for episodes, feedback, comments, or questions? Let the Krewe know by e-mail at kreweofjapanpodcast@gmail.com or on social media (Twitter: @kreweofjapan, Instagram: @kreweofjapanpodcast, Facebook: Krewe of Japan Podcast Page, TikTok: @kreweofjapanpodcast, LinkedIn: Krewe of Japan LinkedIn Page, Blue Sky Social: @kreweofjapan.bsky.social, Threads: @kreweofjapanpodcast & the Krewe of Japan Youtube Channel). Until next time, enjoy!------ Support the Krewe! Offer Links for Affiliates ------Use the referral links below & our promo code from the episode!Support your favorite NFL Team AND podcast! Shop NFLShop to gear up for football season!Zencastr Offer Link - Use my special link to save 30% off your 1st month of any Zencastr paid plan! ------ About MLB Players Trust ------MLB Players Trust WebsitePlaymakers Classic Info & TicketsMLB Players Trust on IGMLB Players Trust on X/TwitterMLB Players Trust on LinkedInMLB Players Trust on Facebook------ Past KOJ Traditional Japan Episodes ------Japanese Soccer on the World Stage ft. Dan Orlowitz (S6E5)Meet the J.League ft. Dan Orlowitz (S6E4)Kendo: The Way of the Sword ft. Alexander Bennett, 7th Dan in Kendo (S4E16)The Life of a Sumotori ft. 3-Time Grand Champion Konishiki Yasokichi (S4E10)Talking Sumo ft. Andrew Freud (S1E8)------ JSNO Upcoming Events ------JSNO Event CalendarJoin JSNO Today!
The New Year often pushes extreme gut health goals...but many resolutions actually make symptoms worse. In this episode of The Gut Show, we break down common January mistakes, why drastic changes can backfire, and how to set realistic, supportive goals instead - especially if you have IBS. What to expect this season + coming soon [3:12] New year messaging [4:28] If you're tempted, try this [6:35] Extreme changes all at once [8:41] Increasing fiber [11:11] Fasting/calories [13:25] Inflammation, MCAS, Endometriosis [16:13] Taking advice from those without experience with your condition [17:36] How to choose goals for the new year [24:55] Map out the steps to reach those goals [27:34] Stress load [28:56] Make room to check in and adjust [32:05] For IBS specifically [35:08] How to achieve the best diet for IBS [37:42] Mentioned in this episode: FREE IBS Warrior Summit MASTER Method Membership Take the quiz: What's your poop personality? Thank you to our partners: FODZYME is the world's first enzyme supplement specialized to target FODMAPs. When sprinkled on or mixed with high-FODMAP meals, FODZYME's novel patent-pending enzyme blend breaks down fructan, GOS and lactose before they can trigger bloating, gas and other digestive issues. With FODZYME, enjoy garlic, onion, wheat, Brussels sprouts, beans, dairy and more — worry free! Discover the power of FODZYME's digestive enzyme blend and eat the foods you love and miss. Visit fodzyme.com and save 20% off your first order with code THEGUTSHOW. One use per customer. ModifyHealth is the leader in evidence-based, medically-tailored meal delivery offering Monash Certified low FODMAP, Gluten free, and Mediterranean meals - expertly crafted to help you achieve better symptom control AND improve overall health. The best part? They make it easy by doing all prep work for you. Simply choose the meals you want, stock your fridge or freezer when meals arrive at your door, then heat and enjoy when you're ready. Delicious meals. Less stress. Complete peace of mind. Check out modifyhealth.com and save 35% off your first order plus free shipping across the US with code: THEGUTSHOW. mBIOTA is the next generation of the elemental diet. Developed with leading gastroenterologists and food scientists, it's the first formula that's both clinically effective and genuinely easy to drink. Pure, easily absorbed nutrients are essential, but the mBIOTA difference is in the details: from their proprietary Amino Taste Modification Technology (ATMT), to their fully vegan and gluten-free ingredients, mBIOTA provides balanced daily nutrition backed by science. The result is a game-changing medical-grade formula that helps restore GI function in patients with SIBO, IMO, IBS, Crohn's, EoE and more. Learn more at mbiota.com and save 20% off their 2 week protocol with the code GUTIVATE. Connect with Erin Judge, RD: Instagram TikTok Work with Erin FREE symptom tracker
Men Respect Women Who Do THIS First Have you ever been on a date where the other person only talks about themselves? Or they are so focused on giving you the ‘right’ answers to your questions that they don’t ask any questions of you? Being present is a big sign of respect IMO. Showing an equal interest in the conversation and asking your own questions helps drive the conversation but also helps both parties determine if another date is worth pursuing. Let’s explore the DEEPER ways to be present in the dating world and how to determine if your guy is looking for a way to move on to the next person. Let’s talk about…Men Respect Women Who Do THIS First Resources: FREE Discovery Call ► http://jonathonaslay.com/coaching Join My VIP Group for $7– http://jonathonaslay.com/midlifelove Self-Love the Book: http://selflovethebook.com Recommended Books: http://jonathonaslay.com/jonathon-recommends The post Men Respect Women Who Do THIS First appeared first on Understand Men Now With Jonathon Aslay.
On today's episode of IMO, Michelle and Craig welcome none other than… The Fonz! Henry Winkler joins to discuss his lifetime journey with dyslexia, growing up in New York, going to therapy, and, yes, his most iconic roles. Plus, Henry shares his thoughts on parenting and being a grandfather. Have a question you want answered? Write to us at imopod.com. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This week, Michelle and Craig are joined by the one and only Jenifer Lewis. Jenifer discusses her lifelong experience with bipolar disorder, her early experiences on Broadway and in Hollywood, and her longtime experience with sex addiction. Plus, in an IMO first, Jenifer spends some time workshopping her unreleased one woman show. Note: this episode contains discussion of sexual abuse and violence. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The [CB][DS] are trying to convince the world that high electricity costs are coming from AI and crypto mining; they are not, they are coming from the green new scam. Gas prices are coming way down. The new system Trump is building is getting stronger and stronger. The [CB] will fight back against Trump's tariff system. The [DS] is pushing back; they want war and they do not want the peace deal. Corruption is being exposed in Ukraine, which is putting a lot of pressure on Zelensky, and the EU is now funding Ukraine. Soon he will be pushed out or he will sign the peace deal. Trump says it's time for an election in Ukraine. The [DS] criminal syndicate that they set up in DC is under threat by the SC. They will rule that Trump has the right to remove the agencies and people; they are not independent of the Executive Branch. Game over. Economy https://twitter.com/MarioNawfal/status/1997946755116359938?s=20 thanks to bad energy policy, not data centers. He slammed subsidies for unreliable sources like offshore wind, saying some projects cost $11B for 1GW of intermittent power, versus $1–2B for 24/7 reliable supply. Burgum laid into what he called "climate extremists," accusing them of prioritizing flashy green experiments over building energy systems that actually work. The result is sky-high bills for electricity that cuts out when the weather does, while lawmakers pat themselves on the back for feel-good "net zero" policies that don't add up. Burgum: "A lot of the higher prices that you're seeing are not related to the AI data centers. The policy choices of the last 5 years, driven by sometimes climate extremists, were the ones that are driving up the prices you're seeing." That is why I have authorized documentation to impose a 5% Tariff on Mexico if this water isn't released, IMMEDIATELY. The longer Mexico takes to release the water, the more our Farmers are hurt. Mexico has an obligation to FIX THIS NOW. Thank you for your attention to this matter! Gas Prices Drop To Lowest Level In Nearly 5 Years Across US Gasoline prices have dropped to their lowest levels in nearly five years and stand at around $2.90 per gallon on average as of Monday, according to data from GasBuddy, a company that tracks gas prices. "The national average has just slipped below $2.90 per gallon for the first time since May 2, 2021," GasBuddy analyst Patrick De Haan wrote in a Sunday post on X. Source: zerohedge.com https://twitter.com/RapidResponse47/status/1998037849539846303?s=20 ADP Weekly Employment Report Signals Rebound In Labor Market The US labor market turned up for the four weeks ending Nov.
22, 2025, private employers added an average of 4,750 jobs a week, according to ADP's new weekly employment data. This week's positive number hints at an upswing in the labor market after four straight weeks of negative pulse estimates and job losses. This follows the almost unprecedented decline in initial jobless claims last week (which some have argued was impacted by Thanksgiving Week irregularities). Source: zerohedge.com https://twitter.com/profstonge/status/1998369537851346975?s=20 "degraded" products that nobody wanted, a terrible idea that slowed Innovation, and hurt the American Worker. That Era is OVER! We will protect National Security, create American Jobs, and keep America's lead in AI. NVIDIA's U.S. Customers are already moving forward with their incredible, highly advanced Blackwell chips, and soon, Rubin, neither of which are part of this deal. My Administration will always put America FIRST. The Department of Commerce is finalizing the details, and the same approach will apply to AMD, Intel, and other GREAT American Companies. MAKE AMERICA GREAT AGAIN! Political/Rights https://twitter.com/DHSgov/status/1998069235734520159?s=20 putting American lives at risk. There are another 4,015 aliens in the custody of an Illinois jurisdiction that ICE is seeking to arrest. Criminal illegal aliens should not be released back onto our streets to terrorize more innocent Americans. https://twitter.com/EricLDaugh/status/1998407499884511706?s=20 https://twitter.com/FBIDirectorKash/status/1998416601050161442?s=20 https://twitter.com/FBIDDBongino/status/1998135848546746381?s=20 daily to dismantle the network and all those criminal actors associated with it. https://twitter.com/EricLDaugh/status/1998400657217257829?s=20 DOGE https://twitter.com/EricLDaugh/status/1998127452195852468?s=20 don't see how they can do that!" "I'll speak about it later. I'll get a FULL report on it." "Europe has to be VERY careful…Europe is going in some BAD directions." @ElonMusk will win this! Geopolitical https://twitter.com/PM_ViktorOrban/status/1998044051203928212?s=20 Hungary will not implement the measures of the Migration Pact. The rebellion begins! War/Peace https://twitter.com/Rasmussen_Poll/status/1998163342465306883?s=20 https://twitter.com/MarioNawfal/status/1998082649425125715?s=20 amid uncertainty about future U.S. involvement. Zelensky met with Macron, Merz, and Starmer to align Europe's position on Ukraine peace talks. The message? If the U.S. steps back, Europe is ready to step up. Macron spoke of "convergence" between Europe, Ukraine, and the U.S., code for: we're not waiting for Trump. Starmer promised "a just and lasting settlement." Merz framed Ukraine's future as "the destiny of Europe." This isn't just about Ukraine anymore, it's about Europe's ability to act without Washington. The subtext is clear: Europe knows Trump may walk away, and they're preparing for it. Ukraine is only part of the equation, the real test is whether Europe can act without Washington. For the first time since 2022, the center of gravity on Ukraine is shifting eastward, to Paris, Berlin, and London. If Trump wins, the burden of leadership falls on Europe. Today may have been the first test of whether it's ready. https://twitter.com/BRICSinfo/status/1998299398456131611?s=20 What's The Likelihood Of A NATO-Russian Non-Aggression Pact? Putin recently proposed providing Europe, the majority of whose countries are part of NATO, with formal guarantees that it won't attack.
In connection with this, he also assessed that those who fearmonger about Russia are serving the interests of the military-industrial complex and/or trying to bolster their domestic image, which exposes their ulterior motives. In any case, his proposal could hypothetically lead to a NATO-Russian Non-Aggression Pact (NRNAP), but only if the political will exists on both sides. Source: zerohedge.com https://twitter.com/TheOtherSideRu/status/1998356606119981155?s=20 it's not a democracy anymore" https://twitter.com/visegrad24/status/1998356214384611652?s=20 hold an election, but I would think the Ukrainian people should have that choice. And maybe Zelensky would win. But they haven't had an election in a long time. They talk about a democracy, but it gets to a point where it's not a democracy anymore," Donald Trump said. As of December 2025, Ukrainian President Volodymyr Zelenskyy's approval (or trust) rating in Ukraine has reportedly plummeted due to a major corruption scandal involving leaked "Mindich tapes" tied to his inner circle and energy sector graft. Multiple sources, including Ukrainian media and lawmakers, indicate the rating has dropped by about 40 percentage points in a single week, now sitting at or below 20-25%. Medical/False Flags [DS] Agenda https://twitter.com/libsoftiktok/status/1998187351026348280?s=20 WATCH: Crockett Launches Senate Campaign By Posting Bizarre Compilation of Trump Repeatedly Calling Her 'Low IQ' FBI Agents Sue Kash Patel After Being Fired Over BLM Support — Claim Kneeling 'Saved American Lives' The FBI agents who kneeled during the George Floyd BLM riots were fired on Friday by the FBI. A group of former FBI agents has filed a lawsuit against Director Kash Patel and the federal government after being fired for supporting the Black Lives Matter movement. The dozen agents complained that almost immediately upon becoming director of the bureau, Patel began working to terminate all agents who had kneeled in support of the movement. The lawsuit also claims the agents would not have been fired had they had the same perceived political affiliations as those involved in the January 6th protests. Source: thegatewaypundit.com The FBI, as a U.S. federal law enforcement agency under the Department of Justice (DOJ), is required to maintain political neutrality and impartiality in its operations and public actions. It does not take official political stands or engage in activism, as its mission focuses on enforcing federal laws without partisan bias. Individual FBI employees (including agents) are subject to strict restrictions under the Hatch Act, which prohibits most forms of partisan political activity to ensure a neutral federal workforce. FBI personnel are classified as "further restricted" employees, meaning they face additional limitations compared to most other federal workers. Key Prohibitions for FBI Employees: These apply at all times (on or off duty) unless otherwise noted, with the goal of preventing any appearance of political influence or coercion: Taking a partisan political stand: They may not endorse or oppose candidates for partisan office or political parties in advertisements, broadcasts, campaign literature, speeches at partisan events, or similar materials if done in coordination with a candidate, party, or partisan group.
Pushing partisan activism: Active participation in partisan political management or campaigns is banned, including organizing rallies/caucuses, promoting/selling tickets to fundraising events, addressing partisan gatherings in support of/opposition to candidates, or driving voters to polls in coordination with partisan entities. They cannot use their official authority to interfere with elections or solicit/discourage political activity from individuals with business before the DOJ/FBI. Permitted Activities for FBI Employees: While heavily restricted, some non-active or non-partisan actions are allowed, primarily off-duty. https://twitter.com/amuse/status/1998131089542713808?s=20 million in fees from Fani Willis's office after she was disqualified for an improper relationship with a special prosecutor. The Georgia Supreme Court removed her permanently in September, opening the door for all 19 defendants to file similar reimbursement claims. The total cost could dwarf Trump's alone and stands as a humiliating rebuke of Willis's partisan prosecution. The blowback is now financial as well as legal. https://twitter.com/MarioNawfal/status/1998354564790284308?s=20 notice. 18 of them are still actively covered. September 2025. Monthly payout: over $10,000. GAO's just…monitoring them. Because apparently nobody at HHS has. No SSN? Fine. No proof of citizenship? Whatever. No income documentation? Come on in. GAO literally wrote in their report: "[We] did not provide documentation yet received coverage." They're not even hiding it – they got benefits with nothing. The system just said yes. Now check the real-world damage. In 2023, 29,000 Social Security numbers somehow got used for multiple full-year coverage plans. By 2024? That jumped to 68,000. Someone's running the same number through the machine twice, three times, however many times it takes, and the alarms aren't going off. Then there's the $94 million that went to dead people in 2023. Not "accounts tied to people who died recently and the paperwork hasn't caught up" – straight up deceased recipients. Death certificates filed, funerals held, checks still clearing. But here's the really wild part: GAO tried to track $21 billion in subsidies from 2023 back to actual Social Security numbers. Couldn't do it. 21 billion dollars just floating out there with no clear connection to who's supposed to be getting it. The system allows multiple enrollments per SSN "to help ensure actual SSN-holder can enroll in cases of identity theft or data entry errors." In other words: we built in workarounds so generous that fraud looks identical to legitimate use. Now Congress is fighting over whether to extend these enhanced COVID subsidies past December 31. Cost to keep them? $30 billion annually. 24 million people enrolled, over 90% getting subsidies. Without extension, premiums spike overnight and 22 million people might lose coverage. Republicans looking at GAO's findings saying: this is exactly why we shouldn't pour another $30B into a system that can't tell fake accounts from real ones. Democrats saying: you're going to kick 22 million people off insurance because less than 1% is fraud? Both sides kinda have a point. Yeah, the fraud's under 1% of total enrollees. But when you're burning $30B yearly and literally cannot verify where $21B went, "less than 1%" stops sounding so minor. Senate vote coming this week. Expected to fail.
Which means scramble for short-term extension, fight continues into 2026 budget battles, and absolutely nothing changes about fraud controls. Because here’s what nobody wants to say out loud: the system isn’t designed to catch fraud. It’s designed to maximize enrollment. When your mandate is “get people covered,” asking too many questions becomes the enemy. Verification slows things down. Documentation creates barriers. Better to let a few fake accounts slip through than risk denying real people who need coverage. So GAO’s 18 fictional enrollees will keep collecting their $10K monthly until someone at HHS manually shuts them down. Which requires someone at HHS to actually read GAO reports. Which requires someone at HHS to care more about fraud than enrollment numbers. Don’t hold your breath. By next year, GAO will run the same test. Find the same results. Write the same warnings. And Congress will have the same fight about whether feeding money into a system that can’t track where it goes is compassionate policy or expensive theater. Meanwhile, somewhere in America, a completely imaginary person just got their subsidized premium renewed for 2026. https://twitter.com/chad_mizelle/status/1998194850324222006?s=20 clown show. Ignore him. In the meantime, Congress needs to start acting like a co-equal branch and initiate its own inquiry into Boasberg. President Trump's Plan Alina Habba Resigns as U.S. Attorney for New Jersey After Courts Rule Against Her Appointment Alina Habba, President Donald Trump's pick to serve as U.S. attorney for New Jersey, has resigned from her role following a federal court's ruling to uphold a lower court's decision that she was not “lawfully” appointed to the office. The news was announced Monday by U.S. Attorney General Pam Bondi, who said she was “saddened to accept Alina's resignation”: https://twitter.com/AGPamBondi/status/1998102734680318084?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1998102734680318084%7Ctwgr%5E61a3e334e8e6099ea26f7cf5005134be5bf746cd%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fwww.breitbart.com%2Ft%2Fassets%2Fhtml%2Ftweet-5.html1998102734680318084 Habba intends to return to the U.S. attorney's office if that occurs, Bondi added, noting that she will be continuing with the DOJ as a senior advisor. Source: breitbart.com Do Not Mistake Compliance For Surrender” – Alina Habba Steps Down As Acting US Attorney For New Jersey Habba's statement Monday said “do not mistake compliance for surrender”. https://twitter.com/AlinaHabba/status/1998101999024550125?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1998101999024550125%7Ctwgr%5Ec3b83e0f57525961eabb9975a6e4dab69d0d73c0%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fwww.zerohedge.com%2Fpolitical%2Fdo-not-mistake-compliance-surrender-alina-habba-steps-down-acting-us-attorney-new-jersey Source: zerohedge.com https://twitter.com/JoeLang51440671/status/1998202248636072142?s=20 Ketanji Brown Jackson claimed the president should have no power to fire expert bureaucrats. She said economists, PhDs, scientists, & transportation officials should operate beyond presidential reach. Such a view would carve the heart out of Article II & cement rule by permanent insiders rather than elected leadership. Jackson's theory elevates the deep state over the voters who choose a president. That is a constitutional revolution in plain sight. https://twitter.com/AwakenedOutlaw/status/1998116399190036973?s=20 Furthermore, the same logic would apply to the Federal Reserve, IMO. 
In fact, that's almost certainly where this is going. Justice Kavanaugh: "I want to give you a chance to deal with the hard hypothetical. When both Houses of Congress and the President are controlled by the same party, they create a lot of these independent agencies or extend some of the current independent agencies into these kinds of situations so as to thwart future Presidents of the opposite party https://twitter.com/nayibbukele/status/1894547479367938142?s=20 https://twitter.com/Rothbard1776/status/1998162884455522528?s=20 https://twitter.com/MJTruthUltra/status/1998149963835191541?s=20 https://twitter.com/EricLDaugh/status/1998129151857848575?s=20 where you have Dem Senators, they won't approve him! This gentlemen's agreement [blue slip] has lasted TOO LONG. It means you can't appoint a GOP US Attorney!" "In VA, NJ, CA, a US Attorney or judge…the only people you can get by are Democrats because they put a HOLD ON IT!" "It only takes one senator! If they are Democrat, they won't approve it." "All because GRASSLEY with his BLUE SLIP stuff won't let anybody go by! And by the way, Democrats have violated blue slip!" Susie Wiles: Trump Will Campaign for 2026 Midterms 'Like It's 2024 Again' White House Chief of Staff Susie Wiles revealed that President Donald Trump will get out and "campaign like it's 2024 again" for the 2026 midterm elections. Wiles went on to explain that "in the midterms, it's not about who's sitting at the White House," but about localizing the election and keeping "the federal officials out of it." "We're actually going to turn that on its head," Wiles shared. "And, put him on the ballot because so many of those low propensity voters are Trump voters. And, we saw, a week ago Tuesday, what happens when he's not on the ballot and not active. So, I haven't quite broken it to him yet, but he's going to campaign like it's 2024 again." Source: breitbart.com