From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" presentation (with back-of-the-envelope calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @ Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean

* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google

* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

* 00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, Chief AI Scientist at Google, to the Latent Space podcast
* 00:00:30 — Owning the Pareto frontier & balancing frontier vs. low-latency models
* 00:01:31 — Frontier models vs. Flash models + role of distillation
* 00:03:52 — History of distillation and its original motivation
* 00:05:09 — Distillation's role in modern model scaling
* 00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
* 00:07:46 — Flash model economics & wide deployment
* 00:08:10 — Latency importance for complex tasks
* 00:09:19 — Saturation of some tasks and future frontier tasks
* 00:11:26 — On benchmarks, public vs. internal
* 00:12:53 — Example long-context benchmarks & limitations
* 00:15:01 — Long-context goals: attending to trillions of tokens
* 00:16:26 — Realistic use cases beyond pure language
* 00:18:04 — Multimodal reasoning and non-text modalities
* 00:19:05 — Importance of vision & motion modalities
* 00:20:11 — Video understanding example (extracting structured info)
* 00:20:47 — Search ranking analogy for LLM retrieval
* 00:23:08 — LLM representations vs. keyword search
* 00:24:06 — Early Google search evolution & in-memory index
* 00:26:47 — Design principles for scalable systems
* 00:28:55 — Real-time index updates & recrawl strategies
* 00:30:06 — Classic "latency numbers every programmer should know"
* 00:32:09 — Cost of memory vs. compute and energy emphasis
* 00:34:33 — TPUs & hardware trade-offs for serving models
* 00:35:57 — TPU design decisions & co-design with ML
* 00:38:06 — Adapting model architecture to hardware
* 00:39:50 — Alternatives: energy-based models, speculative decoding
* 00:42:21 — Open research directions: complex workflows, RL
* 00:44:56 — Non-verifiable RL domains & model evaluation
* 00:46:13 — Transition away from symbolic systems toward unified LLMs
* 00:47:59 — Unified models vs. specialized ones
* 00:50:38 — Knowledge vs. reasoning & retrieval + reasoning
* 00:52:24 — Vertical model specialization & modules
* 00:55:21 — Token count considerations for vertical domains
* 00:56:09 — Low-resource languages & contextual learning
* 00:59:22 — Origins: Dean's early neural network work
* 01:10:07 — AI for coding & human-model interaction styles
* 01:15:52 — Importance of crisp specification for coding agents
* 01:19:23 — Prediction: personalized models & state retrieval
* 01:22:36 — Token-per-second targets (10k+) and reasoning throughput
* 01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.

Jeff Dean: Thanks for having me.

Shawn Wang: It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But it's really impressive to see it all come together like this.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's a whole bunch of things up and down the stack. And all of those really combine to make us able to build highly capable large models, as well as the software techniques to get those large model capabilities into much smaller, lighter weight models that are much more cost effective and lower latency, but still, you know, quite capable for their size.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially, when you worked on the TPU, the thinking was: if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your data center CPU count. Like, what's that discussion today at Google? How do you prioritize frontier versus "we have to deploy this if we build it"?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because that's where you see what capabilities now exist that didn't exist in the slightly less capable last year's version or six-months-ago version. At the same time, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader uses. So I think what we want to do is always have a highly capable, affordable model that enables a whole bunch of lower latency use cases. People can use it for agentic coding much more readily, and then we have the high-end frontier model that is really useful for deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not an either-or choice. You sort of need the frontier model in order to actually get a highly capable, more modest size model.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey Hinton came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But I'm curious how you think about the cycle of these ideas, even, you know, sparse models. How do you reevaluate them? How do you think about what's worth revisiting in the next generation of models? You worked on so many ideas that ended up being influential, but in the moment they might not have felt that way.

Jeff Dean [00:03:52]: Yeah. I mean, I think distillation was originally motivated because we had a very large image data set at the time, you know, 300 million images we could train on. And we were seeing that if you create specialists for different subsets of those image categories, this one's going to be really good at mammals, this one's going to be really good at indoor room scenes, or whatever, and you cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of: okay, what if we want to actually serve that? Can we train all these independent expert models and then squish them into something that actually fits in a form factor you can serve? And that's not that different from what we're doing today. Often today, instead of having an ensemble of 50 models, we have a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. Let me try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution. You can spike models, but it might be lossy in other areas; it's kind of an uneven technique. But you can probably distill it back. I think the general dream is to be able to advance capabilities without regressing on anything else. And that whole capability merging without loss, I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as: you can have a much smaller model, and you can have a very large training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to coax the right behavior out of the smaller model, behavior you wouldn't otherwise get with just the hard labels.
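A minimal sketch of the objective Dean is describing, following the Hinton, Vinyals, and Dean "Distilling the Knowledge in a Neural Network" recipe: the student matches the teacher's softened logits as well as the hard labels. The temperature `T`, mixing weight `alpha`, and toy tensor shapes are illustrative assumptions, not Gemini's actual training setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft-target KL term.

    The teacher's full output distribution, softened by temperature T,
    carries the relative probabilities of wrong answers that one-hot
    labels throw away.
    """
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    # The T^2 factor keeps soft- and hard-label gradients on the same scale.
    kd = F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)  # ordinary hard-label loss
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-way output.
teacher_logits = torch.randn(4, 10)                  # stand-in for the big model
student_logits = torch.randn(4, 10, requires_grad=True)
labels = torch.randint(0, 10, (4,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```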
Jeff Dean: And what we've observed is that you can get very close to your largest model's performance with distillation approaches. That seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as, or even substantially better than, the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked: the original lineup was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that, like, the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are our Pro scale model, and we can distill from that as well into our Flash scale model. So I think it's an important set of capabilities to have. And inference-time scaling can also be a useful thing for improving the capabilities of a model.

Shawn Wang [00:07:35]: Yeah, cool. And obviously, I think the economics of Flash are what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics wise: because Flash is so economical, you can use it for everything. It's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products of various kinds, AI Mode, AI Overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that involve generating many more tokens between when you ask the model to do something and when it actually finishes what you asked it to do. You're going to ask now, not just "write me a for loop," but "write me a whole software package to do X or Y or Z." So having low latency systems that can do that seems really important, and Flash is one way of doing that. Obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well. Like, on TPUs, the interconnect between chips is actually quite high performance and quite amenable to, for example, long-context attention operations, or having sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make these models servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of one generation delayed? I almost think about it as capability saturation: on certain tasks, the Pro model today is saturated, so next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for, at some point the Flash model, in two generations, will be able to do basically everything. So how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But what often happens is that as the models become more capable, people ask them to do more. I think this happens in my own usage. I used to try our models a year ago for some sort of coding task, and they were okay at some simpler things, but wouldn't work very well for more complicated things. Since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask for much more complicated things. And I think that's true not just of coding but of, you know, "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment," or whatever. That's a much more complicated task than people would have asked a year ago. So you are going to want more capable models to keep pushing the frontier of what people can ask the models to do. And that also gives us insight into: okay, where do things break down? How can we improve the model in those particular areas in order to make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally toward "this is what we're building towards"?

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility. They're introduced, and maybe they're quite hard for current models. I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 30%, but not higher. Then you can work on improving that capability, whatever it is the benchmark is trying to assess, and get it up to 80 or 90%. Once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage of the public data, or very related data, into your training set. So we have a bunch of held-out internal benchmarks that we really look at, where we know the data wasn't represented in the training set at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing how we make the model better at those kinds of things. Is it that we need a different kind of data to train on, more specialized for this particular kind of task?
Do we need a bunch of architectural improvements, or some sort of model capability improvements? What would help make that better?

Shawn Wang [00:12:53]: Is there such an example, where a benchmark inspired an architectural improvement? I'm just jumping on that because you just mentioned it.

Jeff Dean [00:13:02]: I mean, I think some of the long-context capability of the Gemini models, which came, I guess, first in 1.5, really was about looking at, okay, we want to have this capability.

Shawn Wang [00:13:15]: And immediately everyone jumped to completely green charts. Everyone had them, and I was like, how did everyone crack this at the same time? Right. Yeah.

Jeff Dean [00:13:23]: I mean, as you say, the single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or so, and most models don't actually go much beyond 128K these days. We're trying to push the frontier to 1 million or 2 million tokens of context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, into the context, and then actually being able to make use of that, is useful. The use cases we're trying to explore there are fairly large. But the single needle-in-a-haystack benchmark is saturated. So you really want more complicated, multi-needle or more realistic "take all this content and produce this kind of answer from a long context" benchmarks that better assess what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing.

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where you're going to win short term, but longer term, I don't know if it's going to scale, and you might have to undo it.

Jeff Dean [00:15:01]: I mean, I like to not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. What you would really want is: can I attend to the internet while I answer my question? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube, and the deeper representations we can find there, not just for a single video, but across many videos. And on a personal Gemini level, you could attend to all of your personal state, with your permission.
So, like, your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way?

Shawn Wang [00:16:26]: By the way, I did some math, and if you spoke all day, every day, for eight hours a day, you'd only generate a maximum of, like, a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But then you say, okay, I want to be able to understand everything people are putting on videos.

Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else is extremely information dense. Yeah.

Jeff Dean [00:16:55]: I mean, one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. To people that sometimes means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. There are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of the trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Because it sort of teaches the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality?

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, video as opposed to static images, because there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you. And that's really what we want these models to do: interpret the things we're seeing, or the things we're paying attention to, and then help us use that information to do things.

Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and some soccer goals, and things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, the date when they happened, and a short description? And you get back an 18-row table of that information extracted from the video, which is, you know, not something most people think of, turning a video into a SQL-like table.

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, you mentioned attending to the whole internet, right? Google was almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out how to build the AI mode that does the much broader search versus the more human one?

Jeff Dean [00:20:47]: I mean, I think even in pre-language-model-based work, our ranking systems were built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods, and you're down to, like, 30,000 documents or something. Then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, say, the 30,000-ish documents, maybe 30 million interesting tokens. And then how do you go from that to the 117 documents I really should be paying attention to in order to carry out the task the user has asked? You can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models. Then you have some system that helps you narrow down from 30,000 to the 117, with a somewhat more sophisticated model or set of models. And then maybe the final model, the thing that looks at those 117 things, is your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion: you are searching the internet, but you're finding a very small subset of things that are relevant.

Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT went inside Google search basically immediately, and it improved results a lot, right?
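A sketch of the narrowing cascade Dean describes, with progressively more expensive scorers over progressively fewer candidates. The scoring functions here are toy stand-ins (word overlap), and the 30,000 and 117 cutoffs are simply the numbers from the conversation.

```python
from typing import Callable, List

def retrieval_funnel(query: str,
                     corpus: List[str],
                     cheap_score: Callable[[str, str], float],
                     mid_score: Callable[[str, str], float],
                     final_rerank: Callable[[str, List[str]], List[str]],
                     k1: int = 30_000,
                     k2: int = 117) -> List[str]:
    """Narrow a huge corpus in stages: a lightweight scorer keeps ~k1
    candidates, a mid-weight scorer keeps ~k2, and only then does the
    most capable model attend to the survivors."""
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: mid_score(query, d), reverse=True)[:k2]
    return final_rerank(query, stage2)

# Toy usage with a trivial word-overlap scorer standing in for real models.
def overlap(q: str, d: str) -> float:
    return float(len(set(q.split()) & set(d.split())))

docs = ["tpu energy efficiency", "banana bread recipe", "in-memory search index"]
print(retrieval_funnel("energy efficient tpu serving", docs,
                       cheap_score=overlap, mid_score=overlap,
                       final_rerank=lambda q, ds: ds))
```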
Shawn Wang: I don't have any numbers off the top of my head, but I'm sure you do; those are obviously the most important numbers to Google.

Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over these very high traffic systems. It's Google, it's YouTube. YouTube has this semantic ID thing, where every item in the vocab is a YouTube video, predicting the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok as well, for xAI.

Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have, like, a history of what the progression was?

Jeff Dean [00:24:09]: Oh yeah. I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. One: we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system, where you have more and more shards as the index grows: you have, like, 30 shards, and if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those shards. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of that index would actually fit in memory across those 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms, like restaurant and restaurants and cafe and bistro and all these things.
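In the spirit of Dean's back-of-the-envelope habit, here is a rough rendering of that arithmetic. The shard and replica counts come from the story; the seek and DRAM latencies are the stock figures from his latency-numbers list, and everything is order-of-magnitude only.

```python
# 60 shards x 20 replicas = 1,200 machines: enough aggregate RAM for one
# full in-memory copy of the index.
shards, replicas = 60, 20
print("machines with a disk copy of the index:", shards * replicas)  # 1200

DISK_SEEK_S = 10e-3    # ~10 ms per random seek on a 2001-era disk (assumed)
MEM_LOOKUP_S = 100e-9  # ~100 ns per main-memory reference (assumed)

def per_shard_latency(n_terms: int, access_s: float) -> float:
    """Each query term costs roughly one posting-list access per shard;
    shards answer in parallel, so this bounds overall query latency."""
    return n_terms * access_s

for n_terms in (4, 50):
    disk_ms = per_shard_latency(n_terms, DISK_SEEK_S) * 1e3
    mem_ms = per_shard_latency(n_terms, MEM_LOOKUP_S) * 1e3
    print(f"{n_terms:2d} terms: disk ~{disk_ms:5.0f} ms vs memory ~{mem_ms:.4f} ms")
# 50 expanded terms are hopeless against disk seeks but essentially free in RAM.
```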
Jeff Dean: And you can suddenly start really getting at the meaning of the words, as opposed to the exact form the user typed in. That was 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.

Alessio Fanelli [00:26:47]: What are the principles you use to design these systems, especially when, I mean, in 2001 the internet is doubling or tripling in size every year? I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this?

Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand which design parameters are going to be most important. So: how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will the system still work well? And I think a good design principle is to design the system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that. Because often what happens is, if you design a system for X and something suddenly becomes a hundred X, there's a very different point in the design space that would not make sense at X but all of a sudden makes total sense at a hundred X. Like going from a disk-based index to an in-memory index: that makes a lot of sense once you have enough traffic, because you now have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, you know, if you've got last month's news index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. Was there any... you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated too.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify the pages; you have to decide which pages should be updated and at what frequency.
Jeff Dean [00:29:30]: Oh yeah. There's a whole system behind the scenes that's trying to decide update rates and the importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah, yeah. Well, this mention of latency and of where things are stored reminds me of one of your classics, which I have to bring up: Latency Numbers Every Programmer Should Know. Was there a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this has sort of eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands, or something?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say: okay, well, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? You can actually do thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in each particular kind of structure.

Shawn Wang [00:31:51]: Which is a simple byte conversion, that's nothing interesting. I wonder, if you were to update your list...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.

Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network? And then, how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? That cost is actually really, really low, right? Because, depending on your precision, I think it's sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be, you know, a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of the thing you moved many, many times.
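The batching arithmetic Dean is walking through, using his round numbers: roughly 1 pJ for a multiply and roughly 1,000 pJ to move a weight across the chip. Both constants are illustrative, not datasheet values.

```python
MOVE_PJ = 1000.0  # energy to move one weight from far SRAM to the multiplier
MAC_PJ = 1.0      # energy for one multiply once the weight is in place

def energy_per_useful_multiply(batch: int) -> float:
    # The move is paid once per weight; the multiply is reused `batch` times.
    return (MOVE_PJ + batch * MAC_PJ) / batch

for b in (1, 8, 256):
    print(f"batch {b:3d}: ~{energy_per_useful_multiply(b):7.2f} pJ per useful multiply")
# batch   1: ~1001 pJ, the data motion dominates 1000:1
# batch 256: ~4.91 pJ, the same move amortized across the whole batch
```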
Jeff Dean: So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost, and the compute cost inefficiency you get, is quite large.

Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? Obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something you already saw with the TPUs: to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you're seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a lot higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. You're now striping your smallish model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, which is the most extreme version. How much is worth doing in hardware when things change so quickly? What's the internal discussion?

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime; the chip will take you three, four, five years out. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field.
And so having people with interesting ML research ideas, things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get interesting hardware features put into, you know, TPU N+2, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N+1, but bigger changes are going to require the chip design to be earlier in its lifetime. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if they work out, they make something ten times as fast; and if they don't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go.

Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way, because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have cases where you adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. Sometimes you can take advantage of, say, lower-precision support that's coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do it.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary...

Jeff Dean [00:38:43]: I mean, I'm a big fan of very low precision, because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very low-bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.

Shawn Wang [00:39:15]: Interesting. So, low precision, but scaled weights. Huh. Never considered that.
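A tiny sketch of the "very low precision plus scaling factors" idea: store each block of weights as 4-bit-range integers with one float scale per block, so precision tracks the local magnitude of the weights. The block size and bit width here are arbitrary illustrative choices.

```python
import numpy as np

def quantize_blockwise(w: np.ndarray, block: int = 32, bits: int = 4):
    """Quantize to signed `bits`-bit integers with a per-block scale."""
    qmax = 2 ** (bits - 1) - 1               # 7 for 4-bit
    w = w.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                # guard all-zero blocks
    q = np.clip(np.round(w / scales), -qmax, qmax).astype(np.int8)
    return q, scales

def dequantize_blockwise(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, scales = quantize_blockwise(w)
err = np.abs(dequantize_blockwise(q, scales) - w).mean()
print(f"mean abs reconstruction error at 4 bits + per-block scales: {err:.4f}")
```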
Shawn Wang: While we're on this topic: the concept of precision at all is weird when we're sampling. At the end of this, we're going to have all these chips that do very good math, and then we just throw a random number generator at the start. So there's a movement towards energy-based models and processors. Obviously you've thought about it; I'm just curious what your commentary is.

Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends there. Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get the equivalent of a very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor. You predict, say, eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving the weights into the multipliers to do the prediction for the tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy, real energy, not energy-based models, and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models, more cheaply and with lower latency.
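A toy rendering of that amortization: a cheap draft model guesses eight tokens ahead, the target model verifies them (in practice, in one batched pass), and the longest agreeing prefix is kept. The hash-based "models" and the 90% agreement rate are pure stand-ins, chosen so that roughly five of eight drafts survive, echoing the figure above.

```python
import random

def speculative_decode_step(target_next, draft_next, prefix, k=8):
    """One round of toy speculative decoding: draft k tokens cheaply,
    keep the prefix the target model agrees with."""
    drafted, ctx = [], list(prefix)
    for _ in range(k):                      # cheap sequential drafting
        tok = draft_next(ctx)
        drafted.append(tok)
        ctx.append(tok)

    accepted, ctx = [], list(prefix)
    for tok in drafted:                     # one batched target pass in practice
        if target_next(ctx) != tok:
            break
        accepted.append(tok)
        ctx.append(tok)
    return accepted or [target_next(list(prefix))]  # always make progress

def target_next(ctx):                       # deterministic stand-in "big model"
    return (len(ctx) * 7 + sum(ctx)) % 4

def draft_next(ctx):                        # agrees with the target ~90% of the time
    return target_next(ctx) if random.random() < 0.9 else random.randrange(4)

rounds = [speculative_decode_step(target_next, draft_next, [0]) for _ in range(1000)]
print("mean tokens per weight-move:", sum(map(len, rounds)) / len(rounds))  # ~5
```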
Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.

Jeff Dean [00:41:23]: Yeah, yeah. I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be very low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and more specialized hardware for the models we care about.

Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things that you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. You see the improvements in both math and coding; if we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they already proved you can do it with deep research. And you kind of have it with AI mode, in a way; it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval into JSON, so I wonder if the retrieval is the verifiable part that you can score, or... how would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even the retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved, to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system.

Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that, every year. And exactly with this RLVR thing, everyone's talking about, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge?

Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in other areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems, right? Fred has two rabbits; he gets three more rabbits; how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now: IMO and Erdős problems, in pure language. That is a really amazing jump in capabilities in a year and a half or so. And for other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that.

Shawn Wang [00:46:20]: That would be AGI, as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
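A sketch of the same-model-as-critic pattern Dean mentions above: rate 2,000 retrieved candidates and keep the 50 most relevant. The `call_model` hook, the prompt wording, and the 0-to-10 scale are all assumptions for illustration, not any particular API.

```python
from typing import Callable, List

def critic_filter(call_model: Callable[[str], str],
                  query: str, retrieved: List[str], keep: int = 50) -> List[str]:
    """Prompt the (same) model as a critic to grade each retrieval."""
    def score(doc: str) -> float:
        prompt = (f"Query: {query}\nDocument: {doc}\n"
                  "Rate this document's relevance from 0 to 10. "
                  "Reply with a single number.")
        reply = call_model(prompt)
        try:
            return float(reply.strip().split()[0])
        except ValueError:
            return 0.0  # treat an unparseable critique as irrelevant
    return sorted(retrieved, key=score, reverse=True)[:keep]

# Toy usage with a stub critic that returns an arbitrary numeric grade.
print(critic_filter(lambda p: str(len(p) % 11), "tpu serving",
                    [f"doc {i}" for i in range(5)], keep=3))
```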
Um, just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? This question about the merger of symbolic systems and LLMs, uh, was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM. Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have, like, a symbolic representation in our heads, right? We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and roll them back: now that approach for solving the problem doesn't seem like it's going to work; I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete, symbolic things, and then a completely different way of, uh, you know, thinking about those things. Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah. Jeff Dean [00:48:06]: I mean, I do think that IMO progression, with, you know, translating to Lean and using Lean plus a specialized geometry model one year, and then the next year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street-sign recognition model; or I want to do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better. Shawn Wang [00:49:10]: And you don't need domain experts. So I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work; I don't know where the IMO competition was held; I don't know the rules of it. I just trained the models. Yeah. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah. Jeff Dean [00:49:39]: I mean, I think, uh, general models will win out over specialized ones in most cases. Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is, uh...
There's this concept of, like, the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, uh, you know, God knows, Gemini Pro is, like, one to ten trillion parameters; we don't know. But the Gemma models, for example, right? A lot of people want the open-source local models like that, and, uh, those carry some knowledge which is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so, how do we, I guess, extract that? Can we divorce knowledge from reasoning, you know? Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? You might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be completely detached from, you know, knowing stuff about the world, right? It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of, like, how long bridges are, right? And it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... Yeah. Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, yeah, right? Jeff Dean [00:52:01]: Like, we're probably not going to train Gemini on my email. We'd rather have a single model that we can then use, with retrieval from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that and have multiple, um, you know, stages of interaction. That makes sense. Alessio Fanelli [00:52:24]: Do you think the vertical models are, like, an interesting pursuit? Like, when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM: are those kind of short-term stopgaps, or? Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming; it'll still be good at Python programming, 'cause we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, 'cause we didn't get to expose it to as much data there. But it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare, uh, module, all of which can be knitted together to work in concert and called upon in different circumstances, right? Like, if I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah. Shawn Wang [00:54:36]: Installable knowledge. Yeah. Jeff Dean [00:54:37]: Right. Shawn Wang [00:54:38]: Just download it as a package. Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah. Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah. Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving: do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really, like, the... Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to, appropriately. But there are a lot of, you know, uh, healthcare organizations that want to train models on their own data, data that is not public healthcare data. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but might be better than a general model trained on, say, public data. Yeah. Shawn Wang [00:55:58]: Yeah. I believe, uh, by the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah. Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text. Shawn Wang [00:56:20]: So, yeah. So you can just do it that way: just put it in the context. Yeah. You can put your whole data set in the context, right? Jeff Dean [00:56:27]: If you take a language like, uh, you know, Somali or something, or Ethiopian Amharic, there is a fair bit of text in the world in those languages, and, um, you know, we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models. Shawn Wang [00:56:49]: Yeah.
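A sketch of the in-context approach Jeff and Shawn are pointing at: instead of fine-tuning, everything available for a low-resource language (say, a grammar, a word list, some parallel sentences) is packed into one long-context prompt, and the model translates from that alone. The directory layout, the corpus name, and the call_llm completion function are hypothetical placeholders, not a real Gemini API.

```python
from pathlib import Path

def build_translation_prompt(corpus_dir, sentence, language):
    """Pack all reference material for a low-resource language into one prompt.

    Assumes a long-context model: a grammar book plus a bilingual word list
    can fit comfortably inside a million-token window.
    """
    parts = [
        f"You will translate English into {language}.",
        "All known reference material for the language follows.",
    ]
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        parts.append(f"--- {path.name} ---\n{path.read_text(encoding='utf-8')}")
    parts.append(f"Now translate into {language}: {sentence}")
    return "\n\n".join(parts)

# Hypothetical usage, with any long-context completion function call_llm:
#   prompt = build_translation_prompt("kalamang_corpus/", "The canoe is full.", "Kalamang")
#   print(call_llm(prompt))
```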
WEBINAR LINK:https://shawnmoore.clickfunnels.com/optiniyvvg89sWant to learn more about Vodyssey or start your STR journey? Book a call here:https://meetings.hubspot.com/vodysseystrategysession/booknow?utm_source=vodysseycom&uuid=80fb7859-b8f4-40d1-a31d-15a5caa687b7FOLLOW US:https://www.facebook.com/share/g/16XJMvMbVo/https://www.instagram.com/vodysseyshawnmoorehttps://www.facebook.com/vodysseyshawnmoore/https://www.linkedin.com/company/str-financial-freedomhttps://www.tiktok.com/@vodysseyshawnmooreCONTACT US:support@vodyssey.comChapters00:00:00 Intro00:02:44 Debating the Value of Short-Term Rentals00:07:09 Housing Costs and Market Dynamics00:09:00 Interest Rates and Investment Decisions00:12:39 Risk vs. Reward in Real Estate00:19:42 Saturation in the Market00:25:40 Time Investment for Returns00:30:59 Effort Required in Short-Term Rentals00:35:50 Understanding Costs in Real Estate00:41:59 The Impact of One Property00:45:32 Choosing the Right Coach
In just five years, BROCKHAMPTON became a Hip-Hop comet, shooting by with seven albums, and it could've been more! With those works, they went from free-wheeling, mixtape-sounding albums to more mature works, lyrically and sonically. It's obvious that they couldn't have existed in previous eras, but could it ever be done again?TIMESTAMPS:Weekly Music Roundup - (1:06)Ben:Don Toliver - OctaneLord Jah Monte Ogbon - As of NowLooking at Birds - Living RoomBy Storm x Injury Reserve - My Ghosts go GhostJordan Ward - BACKWARDWale the Sage - Hug Me As If We Were To Die TomorrowNafe Smallz - It's Not You It's MeThe Game x DJ Drama - Gangsta Grillz EMNTZu - Ferrum SidereumSkrillex - KoraLabrinth - COSMIC OPERA ACT ICharlie:Terrace Martin - PASSIONPJ - Why Do Feelings Matter AnywayLabrinth - COSMIC OPERA ACT ITeeZandos - STILL ODDNija - What I Didn't SayXV & MIKE SUMMERS - The Kid With The Green BackpackTopic Intro/Ben's Research House - (15:00)Saturation - (20:26)Saturation II - (26:45)Saturation III - (36:18)Iridescence - (46:00)Ginger - (53:54)Roadrunner: New Light, New Machine - (1:00:26)The Family - (1:05:41)TM - (1:09:48)Lighter Note - (1:18:34)Thanks for listening. Below are the Social accounts for all parties involved.Music - "Pizza And Video Games" by Bonus Points (Thanks to Chillhop Music for the right to use)HHBTN (Twitter & IG) - @HipHopNumbers5E (Twitter & IG) - @The5thElementUKChillHop (Twitter) - @ChillhopdotcomBonus Points (Twitter) - @BonusPoints92Other Podcasts Under The 5EPN:"What's Good?" W/ Charlie TaylorIn Search of SauceBlack Women Watch...5EPN RadioThe Beauty Of Independence
Broad Match - Danny and Adam break down Amazon's financial trajectory ahead of the Q4 2025 earnings call, exploring why Prime has effectively tapped out, where the retail business is heading, and why Rufus may be Amazon's most important bet for the future of e-commerce. Host: Danny McMillan Co-Host: Adam "Heist" Runquist Episode Summary With Amazon's Q4 2025 earnings call on the horizon, Adam digs into the historical financials of Amazon's retail business to understand where the company has been and where it is heading. The picture is clear: Prime membership has reached over 200 million Americans, covering roughly 75% of the adult population, and growth has slowed to just 3-4% annually. The remaining unsubscribed population is largely economically unfeasible to convert. The numbers tell a compelling story across Amazon's retail business units. First-party retail has matured and is effectively flat or declining. Third-party seller fees have grown 190% since 2019, far outpacing the 75% growth in Amazon's own retail — but sellers are now squeezed to single-digit net margins with little room for further extraction. Advertising remains the standout at 56 billion dollars in 2024 with 300% growth over five years, yet its long-term sustainability depends on healthy seller participation. This sets up what Adam describes as Amazon's innovator's dilemma. Danny and Adam agree that Rufus represents Amazon's play to shift from a purchase destination to a product discovery and research platform, effectively competing with Google, YouTube, and Reddit for the consideration phase. The episode closes with a rallying call for sellers to focus on extreme efficiency, leveraging AI tools to optimise listings at a level of sophistication that was impossible even a year ago, and to prepare for a market where fewer sellers will survive but those who do will be significantly rewarded. Key Takeaways Amazon Prime has effectively saturated the US market at over 200 million members, with the remaining population largely economically unfeasible to convert, signalling the end of Amazon's biggest historical growth engine. Third-party seller fees have grown 190% since 2019 compared to 75% growth in Amazon's own retail, but with sellers operating on single-digit margins, Amazon has limited room to extract further on a per-unit basis. Amazon's advertising business pulled in 56 billion dollars in 2024 with 300% five-year growth, but its future depends on whether enough healthy sellers remain to sustain ad spend. Rufus is positioned as Amazon's answer to the innovator's dilemma — shifting from a purchase-only platform to a product discovery and research destination to drive more visits, higher conversion, and larger basket sizes. AI tools now allow sellers to accomplish listing optimisation work in hours that previously took weeks, making sophisticated conversion optimisation accessible to small teams without additional headcount. The market is entering a consolidation phase where fewer sellers will survive, but those who maintain cash reserves, optimise ruthlessly, and adapt to the changing landscape will benefit as competitors exit.
Chapter Markers 00:00 - Introduction 00:40 - Why Amazon earnings matter for sellers 03:30 - Prime membership growth and saturation 06:22 - First-party retail maturity and decline 09:30 - Third-party seller fees hitting the ceiling 11:10 - Advertising as Amazon's growth engine 13:28 - Rufus and the discovery play 15:47 - The debate around Rufus and objectivity 19:07 - AI efficiency and listing optimisation 22:16 - Beyond keywords and single-dimension thinking 33:24 - Market consolidation and survival strategy 37:19 - Practical steps for sellers right now Resources Seller Sessions Website Seller Sessions YouTube Adam "Heist" Runquist on LinkedIn Adam Heist YouTube Channel
"Geoffrey, comment tu fais pour avoir de tels résultats en si peu de temps ?"C'est une question qui revient souvent. Depuis mon départ de la police en 2020, j'ai formé plus de 250 préparateurs mentaux et accompagné des centaines de leaders. La réalité, c'est que le marché de l'accompagnement explose, mais les exigences des clients aussi.Aujourd'hui, savoir "comment le cerveau fonctionne" ne suffit plus. À l'heure de l'IA, l'information est gratuite. Ce que vos clients achètent, c'est du résultat tangible, du sur-mesure et une incarnation hors norme.Dans cet épisode, je quitte le mode "théorie" pour vous livrer mes 6 protocoles de terrain, ceux qui m'ont permis de propulser l'Académie Puissance Mentale et de transformer des vies, de l'athlète de haut niveau au chef d'entreprise.
In Episode 79 of Geopolitics with Ghost, Ghost focuses on the growing gap between escalating rhetoric and the absence of decisive geopolitical action. The discussion centers on how information saturation, constant alerts, and emotionally charged reporting create the illusion of imminent global conflict while actual strategic moves remain limited or deliberately restrained. Ghost breaks down recent international signaling, media amplification, and the role of timing in shaping public perception, emphasizing that what is not happening often matters more than what is loudly announced. The episode examines how governments use ambiguity, delay, and narrative noise to manage pressure without triggering escalation, and why observers must resist reacting to every headline as a breaking turning point. Throughout the conversation, Ghost stresses patience, historical pattern recognition, and discipline in analysis, urging listeners to focus on structure, incentives, and long-term positioning rather than surface-level panic. Episode 79 continues the show's steady approach to geopolitics by prioritizing context, restraint, and strategic awareness over emotional interpretation.
"Geoffrey, fais de moi une machine."
[00:00] - Intro[01:43] - New Baby and 2026 Goals[05:44] - Service 2211 has been recorded, and will soon be online[07:48] - 2026: A Building Year for Watershape U[09:01] - Watershape U = Education + Reinforcement and Networking[12:10] - The new WU Service Track[13:33] - Upcoming WU Classes at Shows[20:20] - Closing[21:27] - Service 2211: Essential Water Chemistry - Unit A, Part 1[24:49] - Learning Outcomes and Introduction[27:13] - A.1 Hydrolysis, Saturation and Solubility[31:51] - A.2 Water Chemistry Ranges ______________________________Connect with us! Realize your full potential.Watershape University®Water chemistry questions?Orenda®Questions? Comments? Or apply to sponsor the show:ruleyourpool@gmail.com Facebook: @ruleyourpoolYouTube: @rule-your-pool
The stock market is going to crash Wednesday night because of... Microsoft!!! It's a red alert on Wall Street. While everyone was waiting for Microsoft to save the tech sector, the company has just made a desperate announcement: the Maia 200 chip. Is it an admission of failure against Nvidia? In this video, we analyze why Wednesday night could be the tipping point for your portfolio: The budget wall: why Microsoft's 100 billion dollars in spending has investors trembling. Total saturation: Azure is on the verge of physical implosion (no more electricity, no more servers). The last-chance pivot: why they had to drop Marvell in a hurry for Broadcom. The domino effect: if Microsoft disappoints on its guidance, the whole Nasdaq could fall. Get ready for Wednesday night's shock. We take stock of the real risks no one dares to see. Recording date: January 26, 2026 #bourse #Earnings #Microsoft #Nvidia #Investissement
Today's Friday Focus episode features a powerful excerpt from The 2026 Worship Conference, capturing a key moment from the opening night of the event. In this session, Dr. Greg Stiekes delivers a thoughtful and challenging message that calls believers to examine how the Word of Christ must shape and govern the life of the church as a corporate body. Drawing attention to the responsibility of the gathered people of God, Dr. Stiekes presses beyond individual devotion and highlights the necessity of shared submission to Scripture in our worship, ministry, and life together.This excerpt serves as a timely reminder that true worship is not driven by preference, personality, or performance, but by a collective commitment to conform every aspect of church life to the authority of Christ's Word. To learn more about The Worship Conference, including its purpose and upcoming events, or to listen to all of the conference sessions in their entirety, visit www.theworshipconference.org.Greg Stiekes, often known as Pastor Greg, has served as pastor of Gateway Baptist Church since April 2017, initially part-time while teaching at BJU Seminary and transitioning to full-time ministry in 2024 as the church grew. Raised in the Detroit area as the son of an independent Baptist pastor, he trusted Christ as a child and committed to preaching and teaching God's Word during high school. Greg holds degrees from Bob Jones University, Central Seminary, Erskine Theological Seminary, and Southeastern Baptist Theological Seminary, and has served in a variety of ministry roles including associate pastor, church planter, youth pastor, and senior pastor in Michigan, Wisconsin, Minnesota, and North Carolina, as well as on the faculties of Northland Baptist Bible College and BJU Seminary. His teaching focuses on New Testament studies, Greek exegesis, homiletics, apologetics, and biblical worship, and he remains active in writing and theological service. Greg and his wife Rena have five adult children and several grandchildren, and he joyfully shepherds the Gateway family with a desire to see Christ known, loved, and proclaimed.*Bio taken from www.gatewaytr.org
In this episode, I share the 5 mental-preparation strategies I would take with me on any mission, whatever the context, the pressure, or the stakes. No miracle recipes. Lived experience, fieldwork, and choices I stand behind. I explain why, after nearly 15 years of practice, it's not the tools themselves that make the difference, but the timing, the intention, and the way you use them. We talk about nervous-system regulation, confidence, clarity, focus, adaptation... and the ability to move head and heart forward together. Through concrete examples (athletes, police officers, executives, high-pressure situations), I detail these strategies. I show why mental power is not about controlling yourself, but about consciously taking back the controls. A useful episode for coaching professionals, athletes, coaches, executives, and anyone who operates in demanding environments. Because real performance begins when you become the pilot of your inner system again.
In this episode, I share the behind-the-scenes of a mental-preparation program with 15-year-old footballers, run over four months at an amateur club. No armchair theory. Fieldwork, lived experience, real adjustments. I describe how connection, posture, structure, and cohesion become the true levers of progress at that age. We talk about managing emotions, focus, the relationship to mistakes, team spirit, mental recovery... and the small details that profoundly change a group's collective mindset. Through concrete examples (substitutes, the goalkeeper, match pressure, calming routines, commitment), I show why mental preparation is not a magic wand but progressive training that builds autonomy, clarity, and the joy of playing. A useful episode for coaches, mental-preparation trainers, parents, and anyone supporting young people in demanding environments. Because beyond football, this work builds solid foundations for life ⚽
Pascal Praud spends two hours revisiting, without concession, all the stories making the news. Want to react? Call 01.80.20.39.21 (non-premium-rate number) or go to Europe 1's social media to share your opinion and debate the major topics covered in the day's show. Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
In the world of coaching, mental preparation, and personal development more broadly, it is essential to know whom you can trust. Between promises of quick results and seductive rhetoric, keeping your freedom of thought becomes a real challenge. In this episode, I share 5 concrete markers for spotting a web guru, drawn from my experience as a mental-preparation instructor in the Police nationale. An episode for sharpening your critical thinking, choosing ethical coaching, and moving forward with more clarity. Take what is good for you. Happy listening. Geoffrey
» Produced by Hack You Media: pioneering a new category of content at the intersection of health performance, entrepreneurship and cognitive optimisationInstagram: https://www.instagram.com/hackyoumedia/Website: https://hackyou.media/Charlie Morgan dropped out of a business degree when he realised none of his lecturers had ever run companies, worked four jobs burning himself out to prove he wasn't a failure, then discovered sales through an apprenticeship that changed everything.You'll hear why he made his Academy completely free to create a moat no one can compete with, how Hormozi owns the truth about offers the same way Charlie now owns agency fundamentals, and what happens when you realise “passive income” means working 16 hours a day so you can make money while you sleep.Tune in for his take on systemising relationships with spreadsheets, why he told his girlfriend on the first date that work always comes first, and how building B2B software makes the info business look like a walk in the park.00:00 Introduction04:38 Making a free course as a moat against competitors07:00 The arms race of flex marketing and selling the dream10:39 How Tai Lopez and Lord Sugar nudged Charlie off the uni path16:42 Burning out after juggling four jobs and chasing redemption21:33 Becoming a PT and growing fast as a fitness OG on Instagram24:33 The viral Dubai video and its polarising reactions29:52 Why sudden wealth and no guardrails can derail your growth33:21 Realising nice things don't equal happiness or fulfilment36:33 When you're financially set in your 20s, then what?42:26 Turning obsession into output and how love for work evolves46:45 Gym as therapy, consistency, and reclaiming power through strength50:59 Content, gaming addiction, and transferring energy into building54:43 Saturation, short-form fatigue, and the trap of constant content59:32 Cancel culture fading and leaning more into your true self01:04:01 Collaborating with controversial guests and audience backlash01:08:08 Why negativity often hides insecurity01:18:40 Balancing work obsession with a relationship01:25:12 Kids, priorities, and choosing the right partner for legacy01:30:33 Making real friendships after success and the Dubai filter01:33:44 Is London really that bad? Life post-Dubai and recalibrating safety01:36:38 Building software to solve your own business pain points» Escape the 9-5 and build your dream life: https://www.digitalplaybook.net/» Transform your physique: https://www.thrstapp.com/» My clothing brand, THRST: https://thrstofficial.com» Custom Bioniq supplements: https://www.bioniq.com/mikethurston• 40% off your first month of Bioniq GO• 20% off your first month of Bioniq PRO» Join our newsletter for actionable insights from every episode:https://thrst-letter.beehiiv.com/» Join Whoop and get your first month for free:https://join.whoop.com/FirstThingsThrst» Follow CharlieInstagram: https://www.instagram.com/charliemorganbiz/?hl=enAcademy: https://www.skool.com/academy/about
Gavon introduces the new season, focused on cultural intelligence, exploring the implications of cultural saturation and the post-hype economy. He emphasizes the importance of understanding risk and opportunity in branding, moving beyond mere visibility to build lasting cultural relevance. The episode outlines the structure for the upcoming season, including discussions on the state of hype, the trust gap, and the concept of ritual brands.Find TRH on Substack: https://righthype.substack.com/publish/home
In this episode, Dr. Bruno Basso of CIBO Technologies sheds light on how soil sequesters carbon and what happens when soil becomes saturated with carbon. Subscribe for more content on sustainable farming, market farming tips, and business insights! Get market farming tools, seeds, and supplies at Modern Grower. Follow Modern Grower: Instagram Instagram Listen to other podcasts on the Modern Grower Podcast Network: Carrot Cashflow Farm Small Farm Smart Farm Small Farm Smart Daily The Growing Microgreens Podcast The Urban Farmer Podcast The Rookie Farmer Podcast In Search of Soil Podcast Check out Diego's books: Sell Everything You Grow on Amazon Ready Farmer One on Amazon **** Modern Grower and Diego Footer participate in the Amazon Services LLC. Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com.
Mixing Music with Dee Kei | Audio Production, Technical Tips, & Mindset
JOIN OUR PATREON AND GET ACCESS TO EXCLUSIVE CONTENT: https://mixingmusicpodcast.com/exclusiveI WRITE BOOKS FOR CHILDREN: https://deekeiandkayoko.comHIRE DEE KEI: links.deekeimixes.comHIRE LU: https://soundbetter.com/profiles/1419...Hire James: https://www.jamesparrishmixes.com/Find Dee Kei and Lu on Social Media:Instagram: @DeeKeiMixes @masteredbyluTwitter: @DeeKeiMixes @masteredbyluJoin the ‘Mixing Music Podcast' Discord: / discord The Mixing Music Podcast is sponsored by Izotope, Antares (Auto Tune), Sweetwater, Plugin Boutique, Lauten Audio, Filepass, & CanvaThe Mixing Music Podcast is a video and audio series on the art of music production and post-production. Dee Kei, Lu, and James are professionals in the Los Angeles music industry having worked with names like Odetari, 6arelyhuman, Trey Songz, Keyshia Cole, Benny the Butcher, carolesdaughter, Crying City, Daphne Loves Derby, Natalie Jane, charlieonnafriday, bludnymph, Lay Bankz, Rico Nasty, Ayesha Erotica, ATEEZ, Dizzy Wright, Kanye West, Blackway, The Game, Dylan Espeseth, Tara Yummy, Asteria, Kets4eki, Shaquille O'Neal, Republic Records, Interscope Records, Arista Records, Position Music, Capital Records, Mercury Records, Universal Music Group, apg, Hive Music, Sony Music, and many others.This podcast is meant to be used for educational purposes only. This show was filmed and recorded at Dee Kei's private studio in North Hollywood, California. If you would like to sponsor the show, please email us at mixingmusicpodcast@gmail.com.Support this podcast at — https://redcircle.com/mixing-music-music-production-audio-engineering-and-music/donationsAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy
Did you think decompression sickness was reserved for scuba divers with tanks? Think again. Dr. Mathieu Coulange explains why freedivers, too, can "make bubbles." We explore the phenomenon known as "Taravana syndrome," historically observed in pearl divers who strung together deep descents without giving the nitrogen time to clear.
In this episode of Christian Formation and Homeschooling in a Distracted Age, Brendon Naicker addresses the immense influence of modern media on children and the urgent need for Christian parents to cultivate discernment in a digital culture.Brendon explores how news cycles, entertainment, and social platforms shape identity and values, often more powerfully than traditional education or even church involvement. He challenges parents to recognise that media is not neutral—it forms beliefs, desires, and worldviews.Rather than responding with fear or withdrawal, Brendon presents networked homeschooling as a means to teach children wisdom, critical thinking, and biblical discernment within community. The episode highlights the role of Solavia.UK in helping families navigate media pressures together and raise children who can stand firm in Christ amidst a saturated digital age.#ChristianHomeschooling #SolaviaUK #BrendonNaicker #HomeEducationUK #MediaWiseKids #ChristianParents #DigitalDiscipleship #HomeschoolNetwork #FaithFormation #IntentionalParenting #CounterCulturalParenting #RaisingDisciples #LivingTheology #UKHomeschooling #DiscerningChildren #ChristCentredEducation #KingdomEducation
This research examines the psychological impact of saturation diving, proposing an "Underview Effect" similar to the cognitive shifts experienced by astronauts in space. By interviewing aquanauts who lived in undersea habitats, the authors identified themes of awe, tranquility, and flow that arise from extended immersion in a high-pressure environment. Participants reported a heightened sense of planetary fragility and a deep interconnectedness with marine ecosystems, often viewing the ocean as a complex, living "home base" rather than a foreign workspace. While the experience involves significant physical hardship and sensory changes, it frequently results in a long-lasting environmental stewardship and a transformed scientific perspective. Ultimately, the study suggests that this profound submerged experience can reshape human relationships with the natural world and inspire pro-environmental behaviors.#UnderviewEffect #SaturationDiving #AquanautPsychology #UnderseaAwe #PsychologicalImpactOfDiving #FlowStateUnderwater #AweAndTranquility #OceanStewardship #MarineInterconnectedness #PlanetaryFragility #ProEnvironmentalBehavior #TransformedByTheDeep #OceanAsHomehttp://atlantisseacolony.com/https://www.patreon.com/atlantisseacolonyhttps://discord.gg/jp5aSSkfNS
In this third episode of our "medicine and diving" series, we tackle one of the subjects that worries divers the most: the desaturation accident, often confused with the decompression accident. The subject that scares every diver... but that shouldn't frighten you anymore after this episode! What exactly happens in our bodies when nitrogen bubbles invite themselves into the bloodstream? And above all, how do you avoid them, or respond well when they are already there? With Dr. Mathieu Coulange, hyperbaric physician and diving-medicine specialist, we retrace the path of the bubbles step by step: tissue saturation at depth, formation of micro-bubbles on ascent, possible damage to the spinal cord, the lungs, the brain, the inner ear, the joints, or the skin. Far from alarmist talk, Mathieu explains why these accidents remain rare and how rapid treatment allows, in the vast majority of cases, a complete recovery. Together we go over the signs that should raise alarm after a dive: unusual fatigue that worsens, tingling, neurological problems, delayed joint pain, skin patches, vertigo... and the famous 24-hour rule following a dive. From what point should you worry? When should you call for help? Why are pure oxygen by mask and hydration absolutely essential first steps, sometimes more urgent than racing to a hyperbaric chamber? The episode also covers prevention: the value of the safety stop even when it isn't "mandatory," stacking several dives in one day, the limits of certain dive computers, the role of nitrox mixes, and also special precautions for older divers or those with cardiovascular risk factors. Finally, we talk about the patent foramen ovale (PFO) and those closed-glottis efforts that can, in rare cases, help bubbles pass toward the brain. An episode to listen to before your next deep dives, to better understand what is happening in your body... and to dive with more peace of mind. ⚠️ Important disclaimer: The information presented in this episode is provided for general, educational purposes. It in no way constitutes personalized medical advice and cannot substitute for a consultation with a health professional, nor for recognized scuba-diving training. In the event of symptoms, discomfort, or doubt after a dive, consult a physician without delay (ideally one trained in diving medicine) or an emergency service, and follow the safety procedures recommended by your training organizations. Neither the authors of the podcast nor the guest can be held responsible for decisions made solely on the basis of the information shared in this episode.
High Timeline Living Website:https://www.hightimelineliving.com/Fun Astrology YouTube Channel:https://www.youtube.com/@funastrologypodcastBuy Thomas a Coffee!https://www.buymeacoffee.com/funastrologyThank you!Join the Fun Astrology Lucky Stars Club Here!Old Soul / New Soul Podcast - Back Episodes:https://www.buzzsprout.com/2190199https://www.youtube.com/@OldSoulNewSoulAstrologyPodcast
In this episode of the Business of Aesthetics Podcast, host Don Adeesha is joined by Rebecca Landriault, CEO of Apex Aesthetic Consulting, to tackle the reality of operating in a hyper-competitive, high-density market. As the industry shifts away from a "growth at all costs" mindset, Rebecca argues that 2026 will be defined by operational discipline and capital efficiency. She challenges owners to pivot from an "acquisition obsession" to a mastery of retention, warning that in a saturated landscape, differentiation comes not from the newest device, but from comprehensive, lifetime treatment planning. A major focus is identifying the "silent capacity killers" that cause revenue plateaus. Rebecca reveals that the bottleneck is rarely marketing, but often lies in underutilized providers and an untrained front desk unable to credential services. She provides a strict financial framework for staffing, advising that no new revenue-generating hires should be made until existing providers are generating 5x their payroll and are booked 80% of the time. Furthermore, she dissects the "napkin math" of capital equipment sales, urging owners to calculate true ROI based on existing patient volume rather than hypothetical growth before signing any lease. From a strategic perspective, Rebecca redefines the concept of scaling, asserting that "growth is not expansion, it is a duplication of excellence". She cautions against the financial collapse often caused by premature scaling, advising that a practice must achieve a 25% net profit margin and hold six months of cash reserves before considering a second location. Finally, she offers a compelling analogy for membership models, positioning the provider as the "dentist" and the membership as the "toothbrush," to ensure patients protect their investment in high-ticket regenerative procedures through consistent maintenance.
Tom Szulist, Innocence Cannabis, on possible saturation of the cannabis market (WBEN Extras, Tue, 09 Dec 2025). Archive of various reports and news events.
Welcome back to the show!! This week Brainstorm and Eulise are discussing different parenting dynamics that are seen today; the situation that happened in Chicago with the kids attacking a parent and her child; why it's sometimes best to not get into conflict; and also social media dynamics. Enjoy this week's show!! Follow Brainstorm on IG, X, and YouTube: @djbrainstorm4u Follow Eulise on IG: @_eulisedickerson
In this episode of The Women On Top, Valerie Lynn sits down with Dylan Jahraus, powerhouse Etsy seller, coach, and former corporate leader, for a deeply honest conversation about ambition, resilience, and building a life on your own terms.Dylan opens up about what pushed her to walk away from the corporate path and how she turned a simple Etsy shop into a multi-seven-figure business. She shares the real challenges behind the highlight reel, the role community played in her growth, and the mindset shifts that helped her navigate doubt, burnout, and big leaps.Together, Valerie and Dylan dig into the strategies that actually move the needle in e-commerce, from understanding customer behavior to staying ahead of trends, and why consistency still outperforms perfection. Dylan also talks about the personal “why” that fuels her, the legacy she hopes to leave for her daughters, and the quiet power of choosing flexibility, freedom, and self-trust.This episode is equal parts tactical and deeply human, offering practical steps for aspiring entrepreneurs and heartfelt encouragement for anyone ready to chase a bigger vision.Chapters00:00 – The Journey Begins: From Corporate to Etsy Success 02:53 – Building an Empire: Overcoming Challenges & Customer Relations 05:58 – E-Commerce Explained: Strategies That Actually Work 09:13 – Understanding Customer Needs: The Key to Longevity 12:09 – Trends & Market Research: Staying Ahead 14:50 – Facing Doubts: The Importance of a Strong “Why” 18:03 – Creating a Legacy: Personal Stories & Motivations 20:52 – The Power of Flexibility: Owning Your Time 23:58 – Saturation & Strategy: Standing Out in a Crowded Market 26:04 – The Power of Consistency 27:40 – Practical Strategies for Sustainable Growth 30:16 – Understanding Digital Products 31:22 – Aligning Skills With Your Business Ideas 33:17 – Finding Fulfillment in Entrepreneurship 36:11 – Building Confidence Through Action 37:38 – How Long It Really Takes to See Success on Etsy 39:13 – Learning From Mentors 40:43 – Balancing Business & Family 43:04 – Dreaming Bigger: Future Aspirations & GrowthConnect with Dylan: The Ultimate Etsy Course: https://dylanjahraus.com/ Instagram: https://www.instagram.com/dylanjahraus/ YouTube: https://www.youtube.com/channel/UCeO8Gmc2B-3G2fgcFnRR4Xw Podcast: https://podcasts.apple.com/us/podcast/etsy-seller-success-tips-for-starting-growing-and/id1647518076Connect with The Women On Top: Follow The Women On Top Podcast on Apple, Spotify, or anywhere you get your podcasts. Subscribe for more empowering conversations and stories! Website: https://thewomenontop.com/YouTube: https://www.youtube.com/ @thewomenontop Instagram: https://www.instagram.com/thewomenontoppodcast/LinkedIn: https://www.linkedin.com/in/valerie-lynn/
We live in the age of information. With all its wonderful benefits, it also comes with a subtle erosion to the way of Jesus. It's far too easy to hear all that Jesus has to say and simultaneously do nothing with His teaching. But Jesus offers a stern warning and a shocking visual for anyone who would continue in that pattern and approach in relationship with Him. While there is a strong warning, there is also immense hope for those who heed it. What will we do with it? CITY CHURCH EXISTS TO HELP PEOPLE FIND THEIR WAY TO GOD FROM WHERE THEY ARE. You can find us here: www.citychurchboulder.com www.facebook.com/citychurchboulder www.instagram.com/citychurchboulder
Send us a textIn this episode, Teryn Darling sits down with industry veteran Robin Hays Velez, a PMU artist with over 32 years of experience, to talk about what it really takes to last in permanent makeup. From pigments and needle choices to burnout, ethics, and regulators… nothing is off the table.If you've ever wondered what's true, what's hype, and what actually matters for long-term success in PMU, this one's for you.What You'll LearnPigment science: The real differences between iron oxide and organic/carbon pigments and when to use which.Eyeliner tattooing: Why Robin sometimes refuses carbon and how she works safely on mature or crepey lids.Needle mastery: Liners vs shaders, diameters, tapers and why one needle can't do it all.Saturation levels: How to achieve low, medium, or high saturation results without over-traumatizing the skin.Industry ethics & longevity: Setting boundaries, avoiding burnout, and staying educated through the noise of social media.
The guys hear from more of the people, this time on the saturation topic.
Valenti wants to know if sports have become too saturated.
The longtime pals Jon Kelly and Peter Hamby reunite to discuss reports of a Vox Media spin-off sitch—itself a potential micro-micro-microcosm of the WBD deal. And then they turn their attention to Netflix's scheme for combatting its current saturation in the U.S. and Canada. See all the ways bp is investing in America at bp.com/InvestingInAmerica . . . To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
In this episode, agricultural systems scientist Dr. Bruno Basso of CIBO Technologies sheds light on what it means for soil to be carbon saturated and what it means for the soil to lose its carbon. Subscribe for more content on sustainable farming, market farming tips, and business insights! Get market farming tools, seeds, and supplies at Modern Grower. Follow Modern Grower: Instagram Instagram Listen to other podcasts on the Modern Grower Podcast Network: Carrot Cashflow Farm Small Farm Smart Farm Small Farm Smart Daily The Growing Microgreens Podcast The Urban Farmer Podcast The Rookie Farmer Podcast In Search of Soil Podcast Check out Diego's books: Sell Everything You Grow on Amazon Ready Farmer One on Amazon **** Modern Grower and Diego Footer participate in the Amazon Services LLC. Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com.
Survey data show third-party delivery ads are hitting a saturation point and 45% of chains expect to invest less in the channel in 2026, Bikky Co-Founder Abhinav Kapur tells Bloomberg Intelligence. In this episode of Choppin’ It Up, Kapur sits down with BI’s senior restaurant and foodservice analyst, Michael Halen, to discuss the findings of Bikky’s chief marketing officer survey. He also comments on US consumer spending, loyalty promotions, personalization and shrinking marketing budgets. Listen to this episode on Apple Podcasts and Spotify.See omnystudio.com/listener for privacy information.
Jonny and Heather hike into the landscape of major media stories dropped in just the last week. We know President Trump saturates the airwaves in order to both benefit from the attention economy and distract from his administration's more nefarious moves. Using Heather's background in Army Intelligence, they look for patterns across the stories, and they find evidence of severe vulnerabilities. In the back half of the show they examine the two main competing strategies in the opposition and which they think is likely to be more successful and better for the LGBTQ+ community.
Ever twist a saturation knob and wonder if you're hearing compression, distortion, or something in between? In this episode of Inside The Mix, Marc Matthews puts that question to the test with a clean, scientific setup, a 440 Hz sine wave, the Softube Saturation Knob, and Wave Observer, a free oscilloscope plugin by Press Play.By placing Wave Observer last in the signal chain, Marc visually shows how your waveform changes as you dial in saturation, how rounded peaks flatten, harmonics stack up, and a pure sine wave slowly edges toward a square. No more guessing, no more placebo, just a clear visual of how your favourite plugins reshape the sound.Marc explains why visual feedback matters when subtle processing tricks your ears, and walks you through a simple DIY method you can try in any DAW. You'll see exactly what happens around -12 dBFS, where soft saturation tightens dynamics long before the audible grit appears.This quick session helps you connect what you hear to what you see — so you can mix faster, gain stage with intention, and start trusting your ears with confidence.Takeaways:How to use Wave Observer for real-time saturation analysisWhat clipping actually looks likeA repeatable workflow for plugin testing and calibrationIf you're ready to stop mixing blind and start seeing your decisions pay off, on meters, waveforms, and final masters — this one's for you.Subscribe, share the episode with a producer friend, and drop Marc a note with the next plugin you want analysed. Your suggestion might feature in a future episode of Inside The Mix.Links mentioned in this episode:Press Play Wave ObserverFREE Plugin To See Inside Your Mixes - Press Play Wave ObserverSend me a message Support the showWays to connect with Marc: Listener Feedback Survey - tell me what YOU want in 2026 Radio-ready mixes start here - get the FREE weekly tips Book your FREE Music Breakthrough Strategy Call Follow Marc's Socials: Instagram | YouTube | Synth Music Mastering Thanks for listening!! Try Riverside for FREE
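For listeners who want the same experiment without a DAW, here is a small numpy sketch of what the episode demonstrates: drive a 440 Hz sine through a soft-clipping curve and inspect the spectrum. The tanh function is a common stand-in for a saturator's transfer curve (the actual Softube curve is not public, so this is an assumption, not a model of that plugin). As drive increases, the peaks flatten toward a square wave and odd harmonics (1320 Hz, 2200 Hz, ...) stack up, exactly what Wave Observer shows visually.

```python
import numpy as np

SR = 48_000          # sample rate in Hz
F0 = 440.0           # test tone, as in the episode

def saturate(signal, drive):
    """Soft saturation: tanh stands in for a real saturator's curve."""
    return np.tanh(drive * signal) / np.tanh(drive)  # normalize peak to 1.0

t = np.arange(SR) / SR                  # one second of audio
sine = np.sin(2 * np.pi * F0 * t)

for drive in (0.5, 2.0, 8.0):
    out = saturate(sine, drive)
    spectrum = np.abs(np.fft.rfft(out)) / len(out)
    freqs = np.fft.rfftfreq(len(out), 1 / SR)
    # Report the first few odd harmonics; even ones stay near zero
    # because tanh is an odd-symmetric curve.
    for h in (1, 3, 5):
        bin_ = int(round(h * F0))       # 1 Hz bins, so frequency == index
        print(f"drive={drive:4.1f}  {freqs[bin_]:6.0f} Hz  "
              f"{20 * np.log10(spectrum[bin_] + 1e-12):6.1f} dB")
```

Run it at increasing drive values and the third and fifth harmonics climb while the fundamental barely moves, the same behavior you would see dialing the plugin's knob with the oscilloscope last in the chain.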
Stephanie C. DeMasi, MD, joins CHEST® Journal Podcast Moderator, Matt Siuba, DO, MS, to discuss her research comparing neurologic outcomes between lower and higher oxygen saturation targets following cardiac arrest. DOI: 10.1016/j.chest.2025.04.027 Disclaimer: The purpose of this activity is to expand the reach of CHEST content through awareness, critique, and discussion. All articles have undergone peer review for methodologic rigor and audience relevance. Any views asserted are those of the speakers and are not endorsed by CHEST. Listeners should be aware that speakers' opinions may vary and are advised to read the full corresponding journal article(s) for complete context. This content should not be used as a basis for medical advice or treatment, nor should it substitute the judgment used by clinicians in the practice of evidence-based medicine.
In this Hosting Hotline episode, Sarah and Annette take on a question from a new host who's been live on Airbnb for a month without a single booking. Frustrated, they're wondering if the problem is market saturation, high guest fees, or simply the fact that the neighbor's house has a pool.If you've ever launched a property and heard crickets, you're not alone. Sarah and Annette walk through the mindset shift and tactical steps every host needs when bookings aren't rolling in:Stop blaming what you can't control. Saturation, guest fees, and neighbor amenities aren't the full story. Hosts need to focus on the levers they can control.Audit your listing presentation. Are your photos professional, clear, and guest-focused? Is your description highlighting unique value? Is your calendar even open and bookable?Identify true comps. The house with a pool isn't a fair comparison if you don't have one. Focus on properties with similar size, amenities, and guest profiles.Read your analytics. Airbnb's insights show where guests drop off, whether you're showing up in search, and what your conversion rate looks like. Data removes guesswork.Adopt an owner's mindset. Hosting isn't passive income — it's active business ownership. Treat your listing like a product that needs to be tested, tweaked, and improved.At the heart of this episode is a challenge to hosts: take extreme ownership. If your listing isn't getting booked, don't wait for Airbnb's algorithm to save you. The hosts who succeed are the ones who audit their listing, study their competition, and make data-driven changes until they get results.Resources mentioned: YouTube Video: The Secret Airbnb Setting That Improves Listings by 300% – Deep dive into Airbnb settings that can boost visibility and conversions.Mentioned in this episode:Lodgify | Use code TFV20
Inspectah Nick and the mono moto James Powell break down the media saturation in podcast land while also taking a side quest to discuss side pieces and X-Men! Enjoy!Remember to email us at comicconspodcast@gmail.comFollow us @comicconspodcast (instagram)
How do you define luxury? What if it isn't about white tablecloths—but about meeting every guest, every need, every time?Aaron Bludorn didn't just open a restaurant—he opened a new playbook for hospitality. After a business education under Daniel Boulud and Gavin Kaysen, Aaron took a massive bet: move to Houston, build a concept mid-pandemic, and serve a community he was still getting to know. And that gamble paid off.In this episode, he shares how Bludorn became a “Swiss army knife” of occasion—able to host everything from business dinners to caviar-and-burgers at the bar. He also shares how flexibility became a growth engine and how promoting from within turned his group into a magnet for top-tier talent.This is for operators who want to scale without losing soul—and win by design, not default.To explore his restaurant, visit bludornrestaurant.com _________________________________________________________Free 5-Day Restaurant Marketing Masterclass – This is a live training where you'll learn the exact campaigns Josh has built and tested in real restaurants to attract new guests, increase visit frequency, and generate sales on demand. Save your spot at restaurantbusinessschool.comFull Comp is brought to you by Yelp for Restaurants: In July 2020, a few hundred employees formed Yelp for Restaurants. Our goal is to build tools that help restaurateurs do more with limited time.We have a lot more content coming your way! Be sure to check out our other content:Yelp for Restaurants PodcastsRestaurant expert videos & webinars
Corey is a versatile composer with a blend of unique artistry mixed with a deep understanding of cinematic music. Over a decade now in the business, he got his start writing for motion picture advertising, landing his music in spots for Star Wars, Concussion, The Incredibles 2, and countless other projects. His breakthrough as a film composer came in 2023, with his original music in HBO's Last Stop Larrimah, produced by Duplass Brothers Productions which went #1 on Netflix in Australia. Later that year he scored the Taika Waititi directed short film “The Lost Voice” in collaboration with Apple. More recently, he served as the composer for Green & Gold, an indie film that won the Audience Choice Award at both Austin Film Festival and Heartland International Film Festival. He frequently collaborates with many brands including Banana Republic, with whom he recently wrote the music behind the Banana Republic x White Lotus Collection in partnership with HBO. He is represented by Warner Chappell. Connect with Corey:➡️ Insta: @coreymartincomposer➡️ TikTok: @coreymartincomposerwww.coreymartinmusic.comAbout The Lot1 Podcast ✨The Lot1 Podcast is designed for anyone who is interested in or working in filmmaking. Whether you're just starting out or a seasoned veteran, we hope you gain the knowledge you need to improve your craft, achieve your filmmaking goals, or simply get an understanding and appreciation for the roles and duties of your peers and colleagues.
In this episode of the Pastor to Pioneer podcast, Britton hosts Brad Pickens and Eric Green, who share their transformative journey from being church elders in a mega church to establishing a network of micro churches focused on discipleship and community. They discuss their personal faith journeys, the challenges of transitioning from traditional church models, and the importance of fostering intimate relationships within their new church context. The conversation highlights the need for a shift in discipleship dynamics, emphasizing the role of families and the community in spiritual growth.
"Identify where you don't want to go." Connect With Our SponsorsGreyFinch - https://greyfinch.com/jillallen/A-Dec - https://www.a-dec.com/orthodonticsSmileSuite - http://getsmilesuite.com/ Summary In this engaging conversation, Jill chats with Kent Miller to dive into the intricacies of demographics and market analysis within the dental industry, particularly focusing on orthodontic practices. Kent shares his journey from urban planning to founding Dentagraphics, emphasizing the importance of understanding market needs, sustainability, and competition when planning for startups or acquisitions. They discuss the significance of evaluating potential locations, the role of data in decision-making, and the innovative tools offered by Dentagraphics to assist practitioners in making informed choices. Connect With Our Guest Dentagraphics - https://www.dentagraphics.com/ Takeaways Kent Miller is the founder of Dentagraphics, specializing in market analysis for the dental and orthodontic industry.Understanding the market for care is crucial for orthodontic practices.Sustainability and alignment with personal vision are key for practice success.Saturation in a market does not necessarily mean failure for practices.Identifying areas to avoid is as important as finding good locations.New construction does not guarantee growth; infrastructure matters.The right demographics must align with the practice's target audience.Data should inform decisions, but it is not the only factor to consider.Dentagraphics offers innovative tools for demographic analysis and market insights.Entrepreneurship in the dental field requires careful planning and data-driven decisions.Chapters 00:00 Introduction to Kent Miller and Dentagraphics03:09 Understanding Market Analysis in Orthodontics06:00 Key Concepts for Startup and Acquisition Planning09:01 Evaluating Potential Locations for Practices12:06 The Role of Real Estate in Practice Success15:03 Analyzing Competition and Market Dynamics18:14 Metrics for Success in Orthodontic Practices22:25 Understanding Demographics in Orthodontics26:07 The Importance of Growth and Infrastructure30:29 Navigating Urban vs. Suburban Practices34:40 Data-Driven Decision Making for Practices38:16 Innovative Tools for Demographic Analysis43:05 Final Thoughts and ResourcesEpisode Credits: Hosted by Jill AllenProduced by Jordann KillionAudio Engineering by Garrett LuceroAre you ready to start a practice of your own? Do you need a fresh set of eyes or some advice in your existing practice?Reach out to me- www.practiceresults.com. If you like what we are doing here on Hey Docs! and want to hear more of this awesome content, give us a 5-star Rating on your preferred listening platform and subscribe to our show so you never miss an episode. New episodes drop every Thursday!