Podcasts about Recognition

  • 6,967 PODCASTS
  • 11,112 EPISODES
  • 37m AVG DURATION
  • 1 DAILY NEW EPISODE
  • Feb 13, 2026 LATEST

POPULARITY (chart by year, 2019–2026)
Best podcasts about Recognition

Show all podcasts related to recognition

Latest podcast episodes about Recognition

Order of Man
Men Have Lost Initiation | FRIDAY FIELD NOTES

Feb 13, 2026 · 25:53


Men have lost initiation. There was a time when boys were tested before becoming men. They faced hardship. They endured pain. They returned changed - recognized as men. Today, that process is gone. In this episode of Friday Field Notes, Ryan Michler breaks down:

  • Why modern men feel lost
  • The three elements of ancient initiation
  • Why pain is necessary
  • How comfort culture is weakening men
  • Why nobody is coming to initiate you
  • How to initiate yourself

If you feel stuck, drifting, or chasing validation - this episode is for you.

SHOW HIGHLIGHTS

00:00 Men Have Lost Initiation
00:26 When Boys Knew They Became Men
02:50 The Cost of Extended Adolescence
05:10 Comfort Culture Is Weakening Men
07:25 "Where Have All the Real Men Gone?"
08:05 The Three Elements of Initiation
08:30 Removed From Comfort
09:24 The Benefit of Pain
11:15 The Trial: Facing Fear
12:56 Recognition and Acknowledgment
14:49 Drifting, Chasing, and the Integrity Gap
16:35 Nobody Is Coming to Initiate You
17:14 Voluntary Hardship
18:45 What Initiation Looks Like Today
19:45 Brotherhood Is Modern Initiation
20:51 The World Needs Grounded Men
21:15 Questions Every Man Must Ask Himself
22:21 Do Something Difficult This Week
23:47 Steadiness Over Loudness
24:35 Choose the Fire
25:00 Join the Iron Council
25:20 The Men's Forge Event
26:00 Final Challenge

Battle Planners: Pick yours up today! Order Ryan's new book, The Masculinity Manifesto. For more information on the Iron Council brotherhood. Want maximum health, wealth, relationships, and abundance in your life? Sign up for our free course, 30 Days to Battle Ready.

Leveraging Thought Leadership with Peter Winick
Story Precision: The New PR Advantage | 695 | KJ Blattenbauer

Feb 12, 2026 · 20:23


What if "getting PR" isn't about hype at all—but about engineering trust at scale? In this episode, Peter Winick sits down with KJ Blattenbauer, founder of Hearsay PR and author of Pitchworthy: The No-Fluff Playbook to Publicity That Pays Off, who helps founders, creatives, and experts turn clear storytelling and smart media strategy into real authority—without the fluff.   She breaks down what PR actually does: find the story behind your expertise, explain why it matters now, and package it for real-world attention spans. KJ makes the case that your work doesn't "speak for itself" anymore. Not in a market where everyone is being commoditized and AI is accelerating sameness. You still need great work. But you also need amplification. And you need it across the channels where your buyers learn, compare, and decide. We get practical about what "good PR" looks like when you're building a thought leadership platform. Not one hit. Not one logo. Repetition that compounds. One appearance leads to the next. Visibility builds recognition. Recognition builds preference. It's the gym, not the lottery. KJ also brings discipline to measurement. Systems first. Message alignment across platforms. Tracking links so you know what's working and where demand is coming from. Because "branding" is not a strategy when you're accountable for revenue. And if "promotion" makes you cringe, this part matters: KJ reframes PR as service. If your ideas can help people, hiding them is the real ego play. The goal isn't fame. It's getting your work into the rooms where it can do its job. Finally, we tackle the AI question. KJ's take is sharp: AI can support systems and repurposing, but the human story is the differentiator—and audiences are hungry for it.   Three Key Takeaways: • Your work won't speak for itself—amplification is part of the job. Do good work, yes. But you have to shepherd it into the right rooms, at the right time, with the right message. PR is the tool that helps that happen • Authority is built by consistency, not a one-time splash. Waiting until you "have something to promote" costs you money, recognition, and momentum. Start now. Show up regularly. Trust compounds when people see your ideas repeatedly across formats.  • PR is story + packaging for short attention spans—and it can't be a black box. The core job is uncovering what's interesting about your expertise, why it matters now, and presenting it in a way people will actually pay attention to. Then put systems around it (including tracking) so it ties back to real outcomes. If this episode got you thinking about amplifying expertise into authority, go cue up Episode 13 with Pete Weisman next. You'll get a practical playbook for turning strong ideas into executive-level visibility—including how to diversify your offerings, focus your audience, and claim a clear niche so your thought leadership lands with the people who can say "yes." It aligns perfectly with the themes you just heard: amplification over hoping, consistency over one-off wins, and strategy over random activity—all aimed at building recognition that actually supports growth.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together like this.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.
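The "Pareto frontier" framing here is concrete enough to compute: a model belongs on the frontier if no other model is both cheaper and better. A minimal sketch; the model names and numbers are invented placeholders, not real Gemini figures.

```python
models = [
    ("flash-like", 1.0, 78.0),  # (name, relative cost, quality score)
    ("pro-like",   8.0, 90.0),
    ("mid-tier",   4.0, 76.0),  # dominated by "flash-like": pricier and worse
]

def pareto_frontier(candidates):
    """Keep models not dominated by any other (cheaper-or-equal AND better-or-equal)."""
    frontier = []
    for name, cost, quality in candidates:
        dominated = any(
            c <= cost and q >= quality and (c, q) != (cost, quality)
            for _, c, q in candidates
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))  # ['flash-like', 'pro-like']
```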
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. But if you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. And I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think what we've observed is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it enables us, for multiple Gemini generations now, to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.
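Jeff's description of distillation, using the teacher's logits rather than hard labels to "coax the right behavior" out of a smaller model, corresponds to the temperature-scaled soft-label loss of the Hinton, Vinyals & Dean paper he references. A minimal NumPy sketch; the logits, temperature, and mixing weight are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label,
                      temperature=2.0, alpha=0.5):
    """Soft loss: cross-entropy against the teacher's tempered distribution
    (its full logits, not just its argmax). Hard loss: ordinary
    cross-entropy on the gold label. alpha blends the two."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    soft = -(p_teacher * log_p_student).sum(axis=-1)
    hard = -np.log(softmax(student_logits)[hard_label])
    # T^2 keeps the soft term's gradient scale comparable across temperatures.
    return alpha * temperature**2 * soft + (1 - alpha) * hard

teacher = np.array([4.0, 1.0, -2.0])   # big model's logits for one example
student = np.array([2.5, 0.5, -1.0])   # small model's logits
print(distillation_loss(student, teacher, hard_label=0))
```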
Shawn Wang [00:07:02]: So, Dara asked: so the original lineup was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, cool. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, it's just that, economics wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products of various kinds, AI mode, AI overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers the AI mode. Oh, my God. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like the capability as a... In certain tasks, like the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas, uh, in order to sort of, um, make the next generation even better.

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you keep pushing the team internally? Or like, this is what we're building towards. Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have, um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it that we need a different kind of data to train on that's more specialized for this particular kind of task?
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?

Shawn Wang [00:12:53]: Is there, is there such an example, a benchmark-inspired architectural improvement? Like, uh, I'm just kind of jumping on that because you just...

Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know...

Shawn Wang [00:13:15]: Immediately everyone jumped to like completely green charts. Everyone had... I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.

Jeff Dean [00:13:23]: I mean, I think, um, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. Most models don't actually have, you know, much larger than 128K these days, or 200 or something. We're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or putting, you know, multiple hour-long videos, in the context and then actually being able to make use of that is useful. The opportunities to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context, that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?
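The "multi-needle" evaluations Jeff contrasts with single needle-in-a-haystack can be sketched in a few lines. The filler sentence and needle phrasing below are invented for illustration; the idea is simply that the model must recover all hidden facts, not just one.

```python
import random

def make_multi_needle_haystack(n_filler=10_000, n_needles=5, seed=0):
    """Hide several key-value 'needles' in filler text."""
    rng = random.Random(seed)
    lines = ["The sky was a pleasant shade of blue that afternoon."] * n_filler
    needles = {f"code-{i}": rng.randrange(10_000) for i in range(n_needles)}
    for key, value in needles.items():
        lines.insert(rng.randrange(len(lines)), f"The value of {key} is {value}.")
    return " ".join(lines), needles

haystack, needles = make_multi_needle_haystack()
prompt = haystack + f"\n\nList the values of: {', '.join(needles)}."
# Score = fraction of needles whose value appears in the model's answer;
# a single-needle probe would miss the harder "use all of it" requirement.
```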
Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because I think the more meta level I'm trying to operate at here is: you have a benchmark, you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing: yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission. So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens, right, in a meaningful way? Yeah.

Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or, you know, various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of hints to the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example was vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also like a vision-capable thing. So maybe vision is just the king modality and... Yeah.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion. Well, like video as opposed to static images. Because, I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you. Which is really what we want these models to be able to do: interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.

Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of as a turn-video-into-SQL-like-table capability.

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search span versus the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, I think even pre-language-model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the task the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117, with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are relevant.
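Jeff's funnel (trillions of tokens, then roughly 30,000 candidates from lightweight scoring, then roughly 117 documents for the most capable model) is a classic cascade. A toy sketch; the stage sizes echo his numbers, and the scoring functions are placeholders, not anything Google actually uses.

```python
from typing import Callable, List

def cascade(query: str, corpus: List[str],
            cheap: Callable, medium: Callable, expensive: Callable,
            k1: int = 30_000, k2: int = 117) -> List[str]:
    """Narrow a huge corpus with progressively more expensive scorers."""
    stage1 = sorted(corpus, key=lambda d: cheap(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: medium(query, d), reverse=True)[:k2]
    # Only the ~117 survivors are worth the most capable (slowest) model.
    return sorted(stage2, key=lambda d: expensive(query, d), reverse=True)

# Toy scorer: word overlap stands in for the "lightweight" stage; pretend
# the other two are a small and a large model.
def overlap(q, d):
    return len(set(q.split()) & set(d.split()))

docs = ["jeff dean tpu design", "cooking pasta at home", "tpu energy budget"]
print(cascade("tpu design", docs, overlap, overlap, overlap, k1=2, k2=2))
```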
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was basically immediately put inside of Google search, and that improved results a lot, right? Like I don't have any numbers off the top of my head, but I'm sure you guys do; that's obviously the most important numbers to Google. Yeah.

Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Yeah. Like it's Google, it's YouTube. YouTube has this semantic ID thing where every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me at YouTube size. And then most recently Grok also, for xAI, which is like, yeah.

Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have like a history of, like, what's the progression? Oh yeah.

Jeff Dean [00:24:09]: I mean, I actually gave a talk at, uh, I guess, uh, the web search and data mining conference in 2009. We never actually published any papers about the origins of Google search, uh, sort of, but we went through four or five or six generations of, uh, redesigning the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity, because our traffic was growing quite extensively. Um, and so we had, you know, a sharded system, where you have more and more shards as the index grows. You have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms, like restaurant and restaurants and cafe, and, uh, you know, things like that. Uh, bistro and all these things. And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word, as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
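The arithmetic behind the 2001 in-memory move is easy to redo. The shard and replica counts are the ones Jeff quotes; the per-machine RAM figure is an assumption added purely for illustration.

```python
shards = 60            # index split 60 ways to bound per-query latency
replicas = 20          # copies of each shard to absorb query traffic
machines = shards * replicas
print(machines)        # 1200 machines, each with disks

ram_per_machine_gb = 4                      # assumed, era-plausible figure
aggregate_ram_gb = machines * ram_per_machine_gb
print(aggregate_ram_gb)                     # 4800 GB of pooled RAM

# Once the fleet exists for traffic reasons anyway, one full copy of a
# multi-terabyte-scale index can live in pooled memory instead of on disk.
```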
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Shawn Wang [00:29:50]: Yeah, yeah. Uh, well, you know, this mention of latency and saving things reminds me of one of your classics, which I have to bring up, which is "latency numbers every programmer should know." Was there just a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this has sort of eight or 10 different kinds of metrics that are like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um...

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. Um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...
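As a worked example of the back-of-the-envelope style Jeff is describing, here is the disk-versus-memory arithmetic behind the in-memory index story, using the well-known figures from his latency-numbers list (about 100 ns per main-memory reference, about 10 ms per random disk seek). The memory references per term lookup is a rough assumption.

```python
memory_ref_s = 100e-9            # ~100 ns per main-memory reference
disk_seek_s = 10e-3              # ~10 ms per random disk seek

terms = 50                       # synonym-expanded query terms, as above
refs_per_term = 5_000            # assumed memory references per term lookup

disk_cost = terms * disk_seek_s             # one seek per term, per shard
mem_cost = terms * refs_per_term * memory_ref_s

print(f"disk-based: {disk_cost:.2f} s per shard")   # 0.50 s: hopeless
print(f"in-memory:  {mem_cost * 1e3:.1f} ms")       # 25.0 ms: interactive
```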
Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any... if you were to update your...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.

Jeff Dean [00:32:09]: Often a good way to view that is how much state you will need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
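Jeff's energy argument for batching reduces to one line of arithmetic: the roughly 1,000 pJ cost of moving a weight from SRAM is paid once per batch, while the roughly 1 pJ multiply is paid per token. A sketch using those two figures from the conversation:

```python
MOVE_PJ, MAC_PJ = 1000.0, 1.0   # rough picojoule figures quoted above

def picojoules_per_weight_per_token(batch_size: int) -> float:
    """The weight move is amortized across the batch; the multiply is not."""
    return MOVE_PJ / batch_size + MAC_PJ

for b in (1, 8, 64, 256):
    print(f"batch {b:>3}: {picojoules_per_weight_per_token(b):7.1f} pJ")
# batch   1:  1001.0 pJ  (dominated by data motion)
# batch 256:     4.9 pJ  (movement amortized ~200x)
```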
Shawn Wang [00:34:04]: Is there a similar trick, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? Like, to serve at your scale, uh, you probably sort of saw that coming. Like what hardware, uh, innovations or insights were formed because of what you're seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you pay a lot higher cost in time and latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over, say, 16 or 64 chips. Uh, and if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how much do you decide where the improvements have to go? So, like, this is a good example: is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like, what was the internal discussion? Yeah.

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the sort of ML research puck is going, uh, in some sense. Because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime as a chip, to take you three, four or five years. So you're trying to predict two to six years out what ML computations people will want to run, in a very fast changing field. And so having people with interesting ML research ideas, of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do lots of careful ML experimentation to show us, uh, this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way, because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you might train at that lower precision, even if the current generation doesn't quite do that. Mm.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary is, like, uh...

Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount, right? Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those weights.
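"Very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights" is block-scaled quantization. A minimal NumPy sketch; the block size and bit width are illustrative choices.

```python
import numpy as np

def quantize_blocks(w, bits=4, block=32):
    """Round weights to `bits`-bit ints with one float scale per `block`
    weights: tiny per-weight storage plus a shared scaling factor."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit
    groups = w.reshape(-1, block)
    scale = np.abs(groups).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(groups / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_blocks(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, scale = quantize_blocks(w)
print(np.abs(w - dequantize_blocks(q, scale)).mean())  # small reconstruction error
```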
Shawn Wang [00:39:15]: Scaling... okay, interesting. So, low precision, but scaled-up weights. Huh. Never considered that. Interesting. Uh, while we're on this topic, you know, the concept of precision at all is weird when we're sampling, you know. At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?

Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends, though. Energy based models is one. You know, diffusion based models, which don't sort of sequentially decode tokens, is another. Um, you know, speculative decoding is a way that you can get sort of an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: Batch factor. Uh, for, like, you predict eight tokens out, and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are going to be, uh, you know, better from, uh, you know, being able to serve larger models, or, you know, equivalent size models more cheaply and with lower latency.
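The speculative-decoding arithmetic Jeff walks through (draft eight tokens, accept five or six, get roughly 5x amortization of weight movement) in code form, with the conversation's numbers; this ignores the draft model's own, much smaller, cost.

```python
k = 8                    # tokens drafted ahead by the cheap model
expected_accepted = 5.5  # "five or six" accepted per verification pass

# Without speculation: one full weight-movement pass per generated token.
# With speculation: one batched verification pass yields ~5.5 tokens.
passes_per_token_plain = 1.0
passes_per_token_spec = 1.0 / expected_accepted
print(f"amortization: {passes_per_token_plain / passes_per_token_spec:.1f}x")  # 5.5x
```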
Shawn Wang [00:41:03]: Yeah. Well, I think, um, it's appealing intellectually; I haven't seen it really hit the mainstream. But, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.

Jeff Dean [00:41:23]: I mean, I think there's also sort of the more exotic things, like analog based, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting, because they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing, uh, at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with sort of, uh, much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems in how you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to sort of build, uh, things that can accomplish, uh, you know, much more significant pieces of work, uh, collectively, than you would ask a single model to do? Um, so that's super interesting. How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models: the improvements that you're seeing in both math and coding, uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious, like, when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode; in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like, what is it? Both are like information retrieval-ish. So I wonder if the retrieval is, like, the verifiable part that you can score, or what are, like, yeah. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant, or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic, as opposed to an actual retrieval system. Yeah.
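The "same model, prompted differently, as a critic" idea can be sketched as follows; `generate` is a hypothetical stand-in for any LLM call, not a real API.

```python
def generate(prompt: str) -> str:
    # Hypothetical model call; wire this to an actual LLM client.
    raise NotImplementedError("plug in a real model call here")

def critique_retrievals(query: str, candidates: list[str], keep: int = 50) -> list[str]:
    """Second pass: ask the same model, acting as a critic, to score each
    retrieved item, then keep the best-rated ones."""
    scored = []
    for doc in candidates:
        reply = generate(
            f"Query: {query}\nDocument: {doc}\n"
            "As a strict relevance critic, rate 0-10. Answer with a number only.")
        scored.append((float(reply.strip()), doc))
    return [doc for _, doc in sorted(scored, reverse=True)[:keep]]
```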
Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. It makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. That enables us to reason and plan, do chains of thought, and roll them back: that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one. In a lot of ways, neural-net-based models are emulating what we intuitively think is happening inside real brains. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago.

Jeff Dean [00:48:06]: I do think that IMO progression, translating to Lean and using Lean alongside a specialized geometry model one year, and then the next year switching to a single unified model that is roughly the production model with a little more inference budget, is actually quite good, because it shows the capabilities of the general model have improved dramatically, and now you don't need the specialized model. It's very similar to the 2013-to-2016 era of machine learning, when people would train a separate model for each different problem. I want to recognize street signs, so I train a street-sign recognition model; I want to do speech recognition, so I have a speech model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like, yeah, I don't know how the IMO works, I don't know where the competition was held, I don't know the rules of it. I just trained the models. It's kind of interesting that people with this universal machine learning skill set can be given data and enough compute and tackle any task. Which is the bitter lesson, I guess.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of the capacity of a model: abstractly, a model can only contain as many bits as it has. God knows how big Gemini Pro is, one to ten trillion parameters, we don't know. But the Gemma models, for example: a lot of people want open, local models like that, and those models carry some knowledge that isn't necessary. They can't know everything. You have the luxury of the big model, and the big model should be capable of everything, but when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I think you do want the model to be most effective at reasoning if it can retrieve things, because having the model devote precious parameter space to remembering obscure facts that could be looked up is not the best use of that parameter space. You might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: We're probably not going to train Gemini on my email. We'd rather have a single model that we can then use, with the ability to retrieve from my email as a tool, have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction. That makes sense.
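Here's a toy sketch of that multi-stage retrieve-and-reason loop: the model proposes a search, reads the intermediate results, and decides each round whether it can answer or needs to retrieve again. `generate` and `search_email` are hypothetical stand-ins for a model API and a retrieval tool, wired with canned replies so the loop runs.

```python
# Hypothetical stand-ins so the loop runs: a model call and a retrieval tool.
def generate(prompt: str) -> str:
    if "email snippet" in prompt:        # evidence already in context: answer
        return "ANSWER: the flight departs at 9:40am"
    return "SEARCH: flight confirmation email"

def search_email(query: str) -> list[str]:
    return [f"email snippet matching '{query}'"]

def answer_with_retrieval(question: str, max_rounds: int = 4) -> str:
    """Multiple stages of retrieval, reasoning over intermediate results each round."""
    evidence: list[str] = []
    for _ in range(max_rounds):
        prompt = (
            f"Question: {question}\n"
            f"Evidence so far: {evidence}\n"
            "Reply 'SEARCH: <query>' to retrieve more, or 'ANSWER: <answer>'."
        )
        reply = generate(prompt)
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        query = reply.removeprefix("SEARCH:").strip()
        evidence.extend(search_email(query))  # could equally be photos, docs, web
    return "not enough evidence found"

print(answer_with_retrieval("When does my flight leave?"))
```

The knowledge lives in the tools; the parameters only have to carry the reasoning, which is the division of labor being argued for here.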
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, or the best law LLM, are those kind of short-term stopgaps?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer because we didn't expose it to as much data there, even as it gets really good at multilingual things. So I think we'll see some combination of specialized models, and maybe more modular models. It would be nice to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related question, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on a hundred billion tokens, or a trillion tokens, of health data.

Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare, and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on public data.

Shawn Wang [00:55:58]: Yeah.
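Mechanically, the trade-off Jeff describes is a set of sampling weights over domain corpora that must sum to one, so upweighting one domain necessarily displaces the others. A toy illustration with invented domain names and weights; real mixture tuning is far more involved than this.

```python
import random

# Invented mixture weights: the share of each training batch drawn from each
# domain corpus. These numbers are illustrative, not Gemini's actual mix.
base_mix = {"web_text": 0.55, "code": 0.20, "multilingual": 0.15,
            "robotics": 0.05, "health": 0.05}

def reweight(mix: dict[str, float], domain: str, new_weight: float) -> dict[str, float]:
    """Grow `domain` to `new_weight`; shrink every other domain proportionally."""
    others = {k: v for k, v in mix.items() if k != domain}
    scale = (1.0 - new_weight) / sum(others.values())
    out = {k: v * scale for k, v in others.items()}
    out[domain] = new_weight
    return out

def sample_domain(mix: dict[str, float]) -> str:
    """Pick which corpus the next training example comes from."""
    return random.choices(list(mix), weights=list(mix.values()), k=1)[0]

robotics_mix = reweight(base_mix, "robotics", 0.40)
print(robotics_mix)  # code and multilingual shares shrink: that's the displacement
print(sample_domain(robotics_mix))
```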
Shawn Wang: By the way, this is somewhat related to the language conversation: I believe one of your favorite examples was that you can put a low-resource language in the context and the model just learns it.

Jeff Dean [00:56:09]: Oh, yeah. I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's almost no written text.

Shawn Wang [00:56:20]: So you can just do it that way, put the language's whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world. We're probably not putting all the data from those languages into the base Gemini training; we put some of it. But if you put more of it in, you'll improve the capabilities of the models.

Shawn Wang [00:56:49]: Yeah.
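Operationally, that looks like nothing more exotic than front-loading the prompt with whatever documentation of the language exists (a grammar sketch, a word list, parallel sentences) and then asking for a translation. A schematic sketch: `generate` is a hypothetical long-context model call, and the miniature "corpus" strings are invented placeholders, not real Kalamang.

```python
# Hypothetical model call; swap in any real long-context client.
def generate(prompt: str) -> str:
    return "[model translation would appear here]"

# Invented placeholder corpus. In the real demonstration this is an entire
# grammar description plus word lists and parallel text, hundreds of pages.
GRAMMAR = "Adjectives follow the noun. Verbs come at the end of the clause."
WORD_LIST = "sor = fish\nnarun = to swim"
PARALLEL = "sor narun. -> The fish swims."

def translate_low_resource(sentence: str) -> str:
    prompt = (
        "Learn the language documented below, then translate.\n\n"
        f"GRAMMAR NOTES:\n{GRAMMAR}\n\n"
        f"WORD LIST:\n{WORD_LIST}\n\n"
        f"PARALLEL SENTENCES:\n{PARALLEL}\n\n"
        f"Translate into English: {sentence}"
    )
    return generate(prompt)

print(translate_low_resource("sor narun"))
```

No fine-tuning happens; every "learned" fact lives in the context window, which is why the approach scales with context length rather than with training data.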

Uplevel Dairy Podcast
310 | Dr. Joe Lineweaver: A Lifetime of Innovation in Dairy Reproduction

Uplevel Dairy Podcast

Play Episode Listen Later Feb 12, 2026 22:29


In this episode of the Uplevel Dairy Podcast, Peggy Coffeen interviews Dr. Joe Lineweaver, a distinguished figure in the field of reproductive physiology and embryo transfer. Dr. Lineweaver shares his remarkable journey from his roots in Virginia to his academic and professional accomplishments, including his work at Virginia Tech and the founding of Blue Ridge Embryos. He discusses the early challenges and triumphs in embryo transfer technology, his contributions to the development of industry standards, and his dedication to mentoring the next generation. Joe's passion for improving dairy cattle genetics, particularly in Jersey cows, shines through, culminating in his recognition as the 2025 Dairy Shrine Pioneer of the Year.

In this episode of the Uplevel Dairy Podcast, Peggy sits down with Dr. Scott Armbrust, president and owner of Paradocs Embryo Transfer, to delve into his impactful career in dairy cattle genetics and veterinary medicine. Dr. Armbrust shares his journey from growing up on a Nebraska farm to becoming a pioneer in bovine embryo transfer. He discusses his early inspirations, significant contributions through embryo transfer, and his global influence in training veterinarians and advancing dairy genetics. The episode also highlights Dr. Armbrust's ethos on mentoring the next generation of veterinarians and his ongoing contributions to the dairy industry.

This episode is brought to you in partnership with the National Dairy Shrine.
Award applications: https://dairyshrine.org/awards/
Scholarship applications: https://dairyshrine.org/youth/#scholar
Donate to Dairy Shrine: https://dairyshrine.org/donate/
YouTube channel: https://youtube.com/@dairyshrine

Chapters
01:11 Joe's Early Life and Education
02:57 Journey into Embryo Transfer
03:45 Challenges and Innovations in Embryo Transfer
04:48 Career at Virginia Tech
07:28 Founding Blue Ridge Embryos
10:37 Leadership and Mentorship
13:41 Scholarship Programs and Giving Back
15:39 Reflections and Achievements
20:04 Recognition and Final Thoughts

That Will Nevr Work Podcast
S7|E4 Your Guide to Finding Opportunity in Discomfort

That Will Nevr Work Podcast

Play Episode Listen Later Feb 12, 2026 11:09 Transcription Available


You'll explore how opportunities often appear in unexpected forms, challenging your preconceived notions of success. This episode helps you recognize subtle opportunities by embracing discomfort and acting with intention, even amidst uncertainty.

In This Episode:
00:00 Opportunity's Quiet Nature
01:26 The Unfamiliarity of Opportunity
03:43 Growth Demands Response
05:48 Readiness: A Decision, Not a Feeling
08:43 The Cost of Postponing Opportunity

Key Takeaways:
Recognize subtle opportunities that don't fit your expected molds.
Distinguish between discomfort and actual danger in new situations you face.
Cultivate curiosity, observation, and humility to understand opportunities fully.
Decide to be ready rather than waiting for the feeling of readiness.
Act with intention, as your readiness grows through movement, not hesitation.

Resources:
Well Why Not Workbook: https://bit.ly/authormauricechism
Podmatch: https://bit.ly/joinpodmatchwithmaurice
*FREE* 5 Bold Shifts to help you silence doubt and start moving: https://bit.ly/5boldshifts

Connect With:
Maurice Chism: https://bit.ly/CoachMaurice
Website: https://bit.ly/mauricechism
To be a guest: https://bit.ly/beaguestonthatwillnevrworkpodcast
Business Email: mchism@chismgroup.net
Business Address: PO Box 460, Secane, PA 19018

Subscribe to That Will Nevr Work Podcast:
Spreaker: https://bit.ly/TWNWSpreaker
Support the channel
Purchase our apparel: https://bit.ly/ThatWillNevrWorkPodcastapparel

Uplevel Dairy Podcast
309 | A Dairy Genomics Pioneer: Dr. George Wiggins

Uplevel Dairy Podcast

Play Episode Listen Later Feb 11, 2026 30:09


In this episode of the Uplevel Dairy Podcast, Peggy Coffeen sits down with Dr. George Wiggins to discuss his extensive career in dairy cattle genetics and genomics. Dr. Wiggins shares his journey from growing up on a dairy farm to working closely with Dr. Paul VanRaden, leading to significant contributions in the genetic evaluations that propel the dairy industry. He highlights the transformative role of genomics in doubling genetic progress and improving dairy cattle productivity. Dr. Wiggins also touches on his international experiences, including his time with the Peace Corps and the USDA, and reflects on the recognition he received as a Pioneer Award winner from the National Dairy Shrine. Throughout the conversation, the emphasis is on the importance of innovation, data accuracy, and continuous improvement in dairy genetics.

00:50 Early Life and Influences
01:43 Academic Journey and Mentorship
05:01 International Experience and Career Decisions
08:59 Return to Academia and USDA Career
10:27 Advancements in Dairy Genetics
12:39 Impact of Genomics
24:43 Future of Dairy Genetics
27:46 Recognition and Reflections

Award applications: https://dairyshrine.org/awards/
Scholarship applications: https://dairyshrine.org/youth/#scholar
Donate to Dairy Shrine: https://dairyshrine.org/donate/
YouTube channel: https://youtube.com/@dairyshrine?si=dS_EVxaA1XhUXBhz

Webinar information:
Topic: "Avoiding Burnout in a 24/7 Industry"
Date: February 11, 2026
Time: Noon Central
Click here to register: https://us06web.zoom.us/webinar/register/WN_eTGV4PLeTe2gI4np7Lrlzg

The Corelink Solution with James Rosseau, Sr.
207. Wingy Danejah: When Recognition Tests Faith

The Corelink Solution with James Rosseau, Sr.

Play Episode Listen Later Feb 10, 2026 49:57


Jumping over many hurdles on his path to finding Christ, Wingy Danejah shares how he embraced faith through his challenges. He reflects on his early life in Jamaica, the influence of his strict upbringing, and how he discovered his passion for music through poetry and performance. Wingy discusses his experiences touring with renowned artists like Beenie Man and Sean Paul, and the pivotal moment when he decided to dedicate his life to God, leading to a profound shift in his music and purpose. Wingy emphasizes the importance of unity, love, and transparency in both his personal life and music. He candidly addresses the struggles of transitioning from secular to gospel music, the challenges of judgment within the church, and the need for artists to support one another. Wingy also shares insights on managing public perception, the impact of his music on listeners, and his commitment to using his platform for good. Looking ahead, Wingy expresses his desire to create music that reflects God's love and to give back to the community, emphasizing that his journey is about more than personal success: it's about uplifting others and spreading hope.

Her Best Self | Eating Disorders, ED Recovery Podcast, Disordered Eating, Relapse Prevention, Anorexic, Bulimic, Orthorexia
EP 268.5: If I Was Trapped in My Eating Disorder Right Now, Here's Exactly What I'd Do ~ The No BS Relapse Recovery Roadmap

Her Best Self | Eating Disorders, ED Recovery Podcast, Disordered Eating, Relapse Prevention, Anorexic, Bulimic, Orthorexia

Play Episode Listen Later Feb 10, 2026 22:02


The opposite of quitting is recommitting. And sometimes that means you need a spelled-out roadmap to help you define what steps you can take to recommit to recovery.

Today's episode is different. I'm not speaking in theoretical terms or giving advice I wouldn't follow myself. I'm sharing exactly what I would do if I was trapped in an eating disorder right now. The actual steps. The concrete path forward. The golden nugget roadmap I would follow myself.

Whether you're experiencing a relapse, stuck in your recovery, or wish you could go back and tell your younger self what to do, this episode is your clear, actionable guide.

In this episode, you'll discover:
The 6-step roadmap I'd follow if I was trapped in an eating disorder today
Why relapse is normal and doesn't mean you've failed
Step 1: Recognition and acceptance: how to get out of denial faster
Step 2: Immediate outreach: breaking the isolation that keeps you stuck
Step 3: Implementing structure: what to do RIGHT NOW to support yourself
Step 4: Investigating triggers: what's really driving this beneath the surface
Step 5: Developing a crisis response plan: how to create lasting recovery
Step 6: Reconnecting with your WHY: the values your ED is violating
What I wish I could tell my younger self 15+ years ago
Why recovery isn't about perfection, it's about progress
How to recommit to your best self starting TODAY

If you're in the trenches, if you've relapsed, if you're struggling: this roadmap is for you. Not theory. Just honest, practical steps.

THE 6-STEP RECOVERY ROADMAP

STEP 1: RECOGNITION AND ACCEPTANCE
The hardest step: admitting where you are is no longer where you want to be.
If I was relapsing today, I know I'd experience a strong pull toward denial. I might tell myself:
"I'm just being more careful about what I eat"
"I'm having a few bad days"
"I can handle this on my own"
What I'd do instead:
✅ Name what's happening, to get out of denial faster
✅ Ask myself: Am I skipping meals? Preoccupied with food thoughts? Anxious around mealtimes? Weighing myself?
✅ Practice self-compassion: not excusing the behavior, but acknowledging that eating disorders are complex illnesses, not personal failures
✅ Say to myself: "This is really hard. I don't have to do this alone."
This step creates the foundation to move forward in ACTION instead of sitting in denial.

STEP 2: IMMEDIATE OUTREACH
Eating disorders thrive in isolation. My counter-attack would be CONNECTION.
What I'd do:
✅ Contact someone I trust. In my case, my mom. I'd say: "I'm struggling with my thoughts and behaviors. I need support."
✅ Get professional help immediately.
If I had a treatment team: contact them and say, "I'm experiencing relapse. I need an appointment ASAP."
If I didn't: call my primary care doctor, get a referral, look into local ED treatment centers.
✅ Get accountability: schedule meals, keep appointments with myself, check in with someone.
Key truth: Don't wait until things get "bad enough." Early intervention makes a tremendous difference. Breaking isolation doesn't mean everyone needs to know. It means strategically connecting with people who can provide support.

STEP 3: IMPLEMENTING STRUCTURE
What I'd put in place immediately:
✅ Regular eating patterns: have a plan ready, no reinventing the wheel during vulnerable times. Use the same meals daily to reduce decision fatigue.
✅ Clean up social media and entertainment:
Unfollow accounts that trigger comparison or food obsession
Avoid shows glorifying thinness or dieting
Curate recovery-supportive content
Join communities like Her Best Self Society (HerBestSelfSociety.com)
✅ Set clear boundaries with exercise: temporarily pause formalized exercise and focus on gentle movement (this requires support; I couldn't do this alone)
✅ Document thoughts and feelings: not to be perfect, but to increase awareness of patterns and triggers, and rebuild trust with body and mind.
Structure = support. Not rigidity, but safety.

STEP 4: INVESTIGATING TRIGGERS
Eating disorders aren't just about food or weight. What's really happening beneath the surface?
Questions I'd ask myself:
❓ What changes in my life have happened recently? (Transition, loss, increased responsibility, relationship change)
❓ What emotions am I struggling to manage?
❓ What am I trying to numb, distract from, or control?
❓ What needs aren't being met right now?
❓ What external pressures am I responding to?
❓ What beliefs am I believing about my worth, body, or identity?
The truth: eating disorders flare during periods of change and loss of control. Understanding triggers helps you heal beyond just the behaviors; you learn to process emotions in healthier ways.

STEP 5: DEVELOPING A CRISIS RESPONSE PLAN
Lasting recovery requires more than just putting out fires.
What I'd create:
✅ Coping strategies: tools to use when urges arise
✅ Relapse prevention plan: document early warning signs, high-risk situations, and actions to take
✅ Support system: who to call, when, and why
The sustainable plan is about building a life where the eating disorder becomes less necessary and less powerful, and recovery feels like moving TOWARD something meaningful, not just running away from illness. Work with someone to determine exactly what support you need and put that planning in place.

STEP 6: RECONNECTING WITH YOUR WHY
The most important step: remember what the eating disorder is stealing from you.
What I'd do:
✅ Identify the values my ED violates. The ED promises control, safety, worth. But it actually undermines freedom, joy, creativity, authenticity, relationships, and purpose.
✅ Compile a list: What has this ED taken from me?
Holidays ruined
Relationships lost
Moments with loved ones missed
Energy wasted
Dreams on hold
Future opportunities destroyed
✅ Ask: What present moments is it stealing RIGHT NOW? What future opportunities will be destroyed if I don't fix this?
✅ Dream beyond the disorder: What do I want my life to look like? Who is my BEST self?
If I could go back 15+ years and tell my younger self: "You're gonna go through this godawful period, but on the other side is MAGICAL. You'll experience things you never would've allowed: wonderful relationships, contributions to the world, PURPOSE. Start dreaming NOW of the vision beyond this disorder."

KEY QUOTES FROM THIS EPISODE

Becoming the Channel with Robyn McKay
The Conversations Patients Really Want From Their Providers

Becoming the Channel with Robyn McKay

Play Episode Listen Later Feb 10, 2026 8:27


In this powerful episode, Dr. Robyn McKay goes deeper into the conversations patients need to be having with their intuitive clinicians and healthcare providers. She explores why intuitive medicine is shaping the future of healthcare, why intuition matters for every leader, and how it's redefining healing and caregiving.

This episode explores:
Why being an intuitive should be honored
How Western healing is converging with intuition
Why intuition is going to be the major leadership ability
Why intuition is a respected and valued asset
What patients actually want from practitioners
How tools are dependent on our nervous system
How to use intuition for deeper discernment
The need for an entirely holistic way of healing
Why we should start feeling safe in our own skin

If you're an intuitively intelligent physician or clinician, this is your moment to honor your gifts and consciously integrate intuition into the way you practice and lead.

Love what you're hearing? Leave a review on Apple Podcasts and send a screenshot to Robyn. Each month, one listener will receive a Scroll of Recognition: a custom energetic blessing, activation, or intuitive message written just for you.

Robyn McKay, PhD, is an award-winning therapist and psychospiritual advisor who teaches and leads at the intersection of psychology × spirituality × energetics. With deep roots in clinical psychology and a lifetime of living at the crossroads of intuition and credentials, she is a rare bridge between science and soul, credentials and codes, strategy and spirit.

Early in her career, Robyn served as a university psychologist before stepping into her broader calling as a guide for high performers, creatives, and seekers. She addresses a wide spectrum of human experience: healing trauma, anxiety, depression, mood disorders, and ADHD in women; accessing spiritual gifts; and navigating existential crossroads.

Having sold $2.5M+ in retreats and private intensives, Robyn is now architecting an entirely new category of retreats: expert-led, trauma-informed, miracle-level. She helps credentialed, neurodivergent, and spiritually awake women leaders design transformational retreats that carry depth, meaning, and lasting impact.

Connect with Dr. Robyn McKay:
LinkedIn: Robyn McKay, PhD
Facebook: Dr. Robyn McKay
Instagram: @robynmckayphd
Book a call with Dr. Robyn: https://drrobynmckay.com/call
Join the $100K Retreat Leaders Secrets: https://www.facebook.com/groups/100kretreatsecrects

Amoda Maa Podcast
Episode 61: The Embodiment of the Light of Awakening in Ordinary Life

Amoda Maa Podcast

Play Episode Listen Later Feb 10, 2026 18:12


In this talk, Amoda Maa invites us to turn from the recognition of the light of being to its embodiment: the lived expression of awakening in the midst of ordinary life. Recognition shows us that our true nature is awareness. Embodiment asks: how do I live this in the chaos, beauty, and unpredictability of being human? Embodiment is an ongoing journey, not a final state. It is the gentle returning, again and again, to the silent radiance that is always here. Even in the darkest moments, when we meet life with openness rather than resistance, the light reveals itself. Taken from a weekend online retreat titled "The Illumination of God's Being," held in November 2025.

The MFCEO Project
998. Q&AF: Waiting For Recognition, Gaining Control In Business & Stuck In Life

The MFCEO Project

Play Episode Listen Later Feb 9, 2026 45:43


On today's episode, Andy answers your questions on how big your success needs to be before others start appreciating it, how to stay in charge of your business without becoming trapped by it, and how to break out of "feeling stuck" when the path ahead feels unclear and risky.

Meditative Prayers by Pray.com
Financial Peace - Recognition | Dr. Tim Clinton

Meditative Prayers by Pray.com

Play Episode Listen Later Feb 9, 2026 6:51 Transcription Available


In this captivating episode of the Meditative Prayers podcast, hosted by the insightful Dr. Tim Clinton and accessible on Pray.com, we delve into the profound theme of recognizing our spiritual journey, an endeavor that deeply resonates within our Christian community. Throughout our spiritual odyssey, there are moments when acknowledging our progress and experiencing personal recognition becomes a significant longing. These moments not only deepen our faith but also enrich our connections, propelling us toward our individual aspirations. The comforting truth remains steadfast: with the Lord as our unwavering guide, we possess the innate capacity to recognize these aspirations, uncovering renewed hope and purpose in our path. Drawing profound inspiration from sacred scriptures, we embark on an exploration of this transformative human experience.

For those seeking guidance in recognizing their spiritual journey along their path of faith, we extend a heartfelt invitation to explore the Pray.com app. By simply downloading it today, you can embark on a transformative journey of faith and resilience, firmly grounded in the steadfast presence of the Divine. Together, let us wholeheartedly embrace the remarkable potential for recognition within us, finding limitless inspiration and strength during our shared spiritual voyage.

We invite you to join us in this enlightening episode as we journey toward a profound understanding of recognizing our spiritual aspirations and discovering the extraordinary sense of recognition that resides within each of us. Embracing the practice of praying before slumber is more than just a routine; it's an avenue to recenter your heart, aligning it with God's purpose. Let Pray.com's Meditative Prayer be a nightly companion, deepening your bond with the Almighty and settling your spirit for a serene night's rest.

Dr. Tim Clinton is from the American Association of Christian Counselors. For more information, please visit: https://aacc.net/

See omnystudio.com/listener for privacy information.

The Knight Report Podcast
Rutgers Football Adds Drake HC Joe Woodley & DC Adam Cox to Coaching Staff!

The Knight Report Podcast

Play Episode Listen Later Feb 9, 2026 81:56


In this episode of The Knight Report podcast, hosts Mike Broadbent, Richie O'Leary, and Alec Crouthamel discuss Rutgers Football hiring Drake HC Joe Woodley and Drake DC Adam Cox for defensive coaching roles. They discuss the resumes of both and how they fit into Travis Johansen's defense. They close by diving deeper into the Johansen DC hire and answering some listener questions.

00:00 Introduction and Coaching Changes
04:52 Seattle Seahawks Super Bowl Victory
09:55 Joe Woodley: A New Addition to the Coaching Staff
14:46 Adam Cox: The New Safeties Coach
19:56 Recruiting Strategies and Coaching Experience
24:55 Final Thoughts on Coaching Hires
31:49 Final Transfers and Roster Outlook
33:42 Key Players to Watch
41:52 Coaching Philosophy and Player Development
45:42 Adapting Strategies to Personnel
49:23 Building Relationships with Players
55:51 Coaching Salaries and Market Rates
01:02:24 Trust in Coaching Staff
01:04:08 Athletic Achievements and Recognition
01:06:09 Financial Insights and Athletic Deficits
01:12:05 Future Prospects and Recruitment Strategies
01:16:44 Closing Thoughts and Future Outlook

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Selling From the Heart Podcast
Lead Like a Person, Not a Position featuring Mark Carpenter

Selling From the Heart Podcast

Play Episode Listen Later Feb 7, 2026 30:06


Mark Carpenter is a keynote speaker, leadership coach, and bestselling author dedicated to reshaping business leadership with a human-first approach. With experience across multiple industries, Mark helps organizations improve productivity, engagement, and commitment by fostering authentic connections. He is the author of Lead Like a Person, Not a Position and co-author of Master Storytelling: How to Turn Your Stories Into Experiences that Teach, Lead, and Inspire. Mark's work blends heart and strategy, equipping leaders to move beyond titles and authority to build trust, unlock commitment, and create cultures where people truly thrive.

SHOW SUMMARY
In this episode of the Selling from the Heart Podcast, Larry Levine and Darrell Amy are joined by Mark Carpenter to explore what it truly means to lead and sell like a person, not a position. Mark challenges traditional leadership models that rely on authority and hierarchy, emphasizing instead the power of authenticity, presence, and intentional connection. Drawing from his bestselling book, Mark outlines three essential leadership skills (listening intently, communicating intentionally, and recognizing individuals) that directly impact trust, performance, and engagement. The conversation also addresses why top-performing salespeople often struggle when promoted into leadership roles without people-skills training, and why mentorship is critical for developing effective leaders. This episode offers practical, human-centered guidance for anyone looking to lead and sell from the heart.

KEY TAKEAWAYS
Leadership and sales success begin with human connection, not titles or authority.
Lead and sell as a person first; positions don't build trust, people do.
The three core leadership skills: listening intently, communicating intentionally, and recognizing individuals.
Many leaders fail because they receive position training but not people-skills training.
Being a great salesperson does not automatically translate into being a great leader.
Recognition is just as important as correction, and often overlooked.
Listening requires discipline and presence in a fast-thinking world.
Mentorship accelerates leadership readiness and long-term effectiveness.

HIGHLIGHT QUOTES
"We do not rise to the level of our expectations. We fall to the level of our training."
"People can speak at about 125 words per minute, listen at 400, and think at 900. Presence is work."
"What's the best thing about your work? The people. What's the worst thing? The people."
"People are messy... and there's joy in the messiness too."

Amazing Teams Podcast
Taco Bytes: 10 Lessons for 10 Years of Building HeyTaco

Amazing Teams Podcast

Play Episode Listen Later Feb 6, 2026 20:43


Send us a text

To celebrate HeyTaco's 10th birthday, Una sits down with HeyTaco founder Doug Dosberg for a heartfelt and hilarious trip down memory lane. In this special Taco Bytes episode, Doug shares 10 powerful lessons from a decade of building a gratitude-powered business, complete with taco costumes, customer stories, and Beyoncé GIFs. If you're a startup founder, team builder, or taco enthusiast, this one's for you.

Highlights from the Episode
The Big 1-0: Doug reflects on what it feels like to hit the 10-year mark and why it doesn't feel like it's been that long.
Behind the List: How he went from "I don't have any lessons" to "I have too many lessons."
Lesson Threads: The top 10 takeaways cover everything from building a bootstrapped SaaS to designing human-centered recognition.

Doug's 10 Lessons for 10 Years
1. Building a business is hard: it takes time, patience, and the courage not to quit when things get tough.
2. Don't do it for the money: solving a real problem beats chasing revenue any day.
3. Work on a problem you actually feel: Doug built HeyTaco because he knew what it felt like to do meaningful work and feel invisible.
4. Your customers are the best investors you'll ever have: he bootstrapped HeyTaco from day one and never took outside funding.
5. Tools don't create culture. People do. A moving story about a team using tacos to lift up a teammate shows that culture lives in moments, not mechanics.
6. Customer service is the product: Doug still checks the support inbox. Listening to users fuels innovation.
7. Recognition works best when it's imperfect: typos, emojis, and awkward phrasing are features, not bugs.
8. Recognition is a leading indicator: tacos often spike before major milestones (like a little ChatGPT launch

Deck of Many Aces
Splitting the Deck - THIS is Slugblasting

Deck of Many Aces

Play Episode Listen Later Feb 6, 2026 56:17


Toby, the champion of the solo skate tournament, is gearing up to be the hottest new slugblaster (and annoy Krystal) with a sponsorship deal from Miper, but Conan's discovered all is not as it seems. Can he save slugblasting before it's too late? Not to mention, the team still need to compete in The Gauntlet!

Krystal checks their phone, no matter what. Nia learns about safety. Toby reveals a secret. Conan memes desperately.

This one-shot uses the Slugblaster system by Mikey Hamm, published by Mythworks.

Find our special guest Shamini as one of the RPGeeks on YouTube.

Today's promo: The One Page TTRPG Bundle (Volume 2) from La Lionne Publishing.

Music by Chloe Elliott:
Not A Crime
Jolted Awake
Alive and/or Dead
A World of Many Colours

Artwork by Eiriol Evans.
Sound effects from Zapsplat.

Join our Discord server here for free!
Support us by becoming a patron on Patreon.
Check out the Deck of Many Aces original soundtrack on music streaming services like Spotify.

Other projects:
Apply to be part of TTRPGirls and play in games run by Am, Chloe and other fab women and femmes here.
Listen to Am and Chloe on RWD. You can find them on Twitter and Instagram @RWD_Pod.
Listen to Chloe voice Quinn/Cynthia in C4DAC3U5.
Listen to Chloe voice Eadith in Legend of the Bones.
Find out what Ellie's up to at elliewebster.co.uk and sign up to their mailing list here to keep updated on all their creative projects.

Asexuality and Aromantic Resources:
The Asexual Visibility and Education Network
The Aromantic-spectrum Union for Recognition, Education, and Advocacy

Deck of Many Aces is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards of the Coast. Portions of the materials used are property of Wizards of the Coast. ©Wizards of the Coast LLC. All the characters in this podcast are fictitious, and any resemblance to actual persons, living or dead, is purely coincidental.

Support this show: http://supporter.acast.com/deck-of-many-aces. Hosted on Acast. See acast.com/privacy for more information.

Brock and Salk
Hour 4 - More On Nick Emmanwori, Kyle Brandt, Still No National Recognition For The Seahawks

Brock and Salk

Play Episode Listen Later Feb 5, 2026 41:43


Brock and Salk continue to talk through the Nick Emmanwori news and the potential severity and implications of it for Sunday. They also have Kyle Brandt of Good Morning Football join the show to discuss the Seahawks Super Bowl chances, Sam Darnold, what the national headlines are around the team and much more. They then react to the interview and are still fascinated that the Seahawks stars are talked about so little outside of Sam Darnold.

SicEm365 Radio
Why Darren Woodson's Cowboys Legacy Deserves Hall of Fame Recognition

SicEm365 Radio

Play Episode Listen Later Feb 5, 2026 23:45


Former Cowboys safety Darren Woodson joins the show for a wide-ranging conversation on his Hall of Fame case, what versatility really meant in the Cowboys dynasty defenses, and why winning championships always mattered more than individual stats. Woodson reflects on playing multiple positions, the culture set by Jimmy Johnson's teams, the grind of elite preparation, and his ongoing passion for seeing the Cowboys get back to the standard he helped build. #nfl #nflhalloffame #canton #dallascowboys #superbowl Learn more about your ad choices. Visit megaphone.fm/adchoices

Chapel – Southern Equip
Wisdom is Greater Than Recognition

Chapel – Southern Equip

Play Episode Listen Later Feb 5, 2026



Walking With Dante
Dante Faints For The Third Time In COMEDY: PURGATORIO, Canto XXXI, Lines 64 - 90

Walking With Dante

Play Episode Listen Later Feb 4, 2026 35:26


Beatrice has finished her case against the pilgrim Dante. All that's left is for him to find his way beyond confession and into contrition . . . which he does with a major crack-up that leads him to faint for the third time in COMEDY.

Before he collapses, the poem begins a series of inversions or reversals that both increase the ironic valences of the passage and give its reader an almost vertigo-inducing sense of Dante's emotional landscape.

In a difficult passage in the Garden of Eden, Beatrice accomplishes what she came for. Join me, Mark Scarbrough, as we explore the slow build-up to the final moment of contrition . . . which mimics the moment when Dante gives way in front of Francesca, back in INFERNO's circle of lust.

Here are the segments for this episode of WALKING WITH DANTE:
[01:20] My English translation of PURGATORIO, Canto XXXI, Lines 64 - 90. If you'd like to read along or continue the conversation with me, please find the entry for this episode on my website, markscarbrough.com.
[04:15] Dante, from boy to man.
[07:26] Recognition, the key to the passage, to contrition, and a possible node of irony.
[10:38] The "unbearded" oak and the final crack-up.
[13:49] Iarbas and Dido v. Dante and the new Dido.
[16:28] Beatrice's venom.
[17:27] Dante's beard.
[20:00] The angels' departure?
[21:16] The meaning of the beast's two natures.
[23:53] Glossing the end of the passage: lines 82 - 90.
[27:57] Francesca and her physical seduction v. Beatrice and her physical-theological seduction.
[33:01] Rereading the passage: PURGATORIO, Canto XXXI, lines 64 - 90.

LTC University Podcast
Winning Teams Don't Just Communicate—They Connect

LTC University Podcast

Play Episode Listen Later Feb 3, 2026 36:47


In this episode, Jamie sits down with Colin Stevens to talk about the difference between communicating and actually connecting. They unpack why teams can look successful on the outside but be disconnected on the inside, how adversity reveals character, and why connection always carries risk. You'll also learn the two types of respect, the quiet trust-killers that damage teams over time, and the three controllables—effort, attitude, and energy—that determine whether connection grows or dies. www.YourHealth.Org

McNeil & Parkins Show
Frank Thomas irked by lack of recognition in White Sox's Black History Month post

McNeil & Parkins Show

Play Episode Listen Later Feb 2, 2026 12:42


Matt Spiegel and Laurence Holmes discussed a social media dispute between White Sox legend Frank Thomas and the organization.

Ask Me How I Know: Multifamily Investor Stories of Struggle to Success
#269 When Parenting Pressure Feels Heavier Than It Should

Ask Me How I Know: Multifamily Investor Stories of Struggle to Success

Play Episode Listen Later Feb 2, 2026 9:12


Parenting pressure can linger even when life feels stable. This episode explores why subtle tension isn't failure, but information, and how awareness creates safety when identity-level misalignment has quietly replaced presence.

Parenting pressure doesn't always arrive during crisis. Often, it shows up after things have settled: when the hard season has passed, routines are working, and life looks "fine" from the outside. And yet, something feels tighter than it needs to be.

In this Monday episode of The Recalibration, Julie Holly introduces the Recognition stage of identity-level recalibration through the lens of parenting, not as a strategy to improve, but as a relational environment where pressure and presence quietly shape everything.

This conversation is for high-capacity humans who are still showing up, still caring deeply, and still holding responsibility, but noticing that it costs more than it used to.

In this episode, you'll explore:
Why parenting tension often appears after survival mode ends
How subtle tightness is a form of awareness, not failure
What the Recognition stage actually is, and why it always comes first
How pressure quietly replaces presence without us realizing it
Why noticing does not obligate action or decision-making
How nervous system safety is created through permission, not urgency
The difference between being less capable and being less overextended

Drawing from nervous system wisdom, psychology, and lived experience, Julie reframes "feeling stuck" not as a lack of insight, but as a learned reflex to act too quickly on awareness, a pattern that keeps the system braced and prevents integration.

This is not mindset work. It's not productivity coaching. And it's not another parenting approach. Identity-Level Recalibration (ILR) works at the root, creating the conditions where awareness is safe, pressure releases, and presence returns naturally.

This episode is about orientation, not resolution. Recognition before release. Companionship instead of correction.

Today's Micro Recalibration:
Complete this sentence, without analysis or fixing: "One place parenting feels tighter than it needs to be is..."
Awareness is enough for today.

Explore Identity-Level Recalibration:
→ Join the next Friday Recalibration Live experience
→ Take your listening deeper! Subscribe to The Weekly Recalibration Companion to receive reflections and extensions to each week's podcast episodes.
→ Follow Julie Holly on LinkedIn for more recalibration insights
→ Schedule a conversation with Julie to see if The Recalibration is a fit for you
→ Download the Misalignment Audit
→ Subscribe to the weekly newsletter
→ Books to read (tidy categories on Amazon; I've read/listened to each recommended title)
→ One link to all things

The Midday Show
Hour 1 – Jalen Johnson getting deserved national recognition

The Midday Show

Play Episode Listen Later Feb 2, 2026 40:06


In Hour 1, Andy and Randy talk about the start of Super Bowl week, Jalen Johnson being selected for the All Star Game, the bad loss to the Pacers, Michael Penix's latest comments on the offense he was asked to run, and the Falcons adding even more accomplished names to their offensive coaching staff.

The Blackprint with Detavio Samuels
Redefining Legacy Beyond Fame and Recognition

The Blackprint with Detavio Samuels

Play Episode Listen Later Feb 2, 2026 12:25


On this episode of The Blackprint, we're pulling together standout moments from past conversations that challenge traditional ideas of legacy. We're reflecting on purpose, leadership, and long-term impact, from investing in people to building systems that outlast any single individual. Featured on this episode are Everette Taylor, Dre London, Lethal Shooter, Lynae Vanee, and Tunde Balogun. Follow host Detavio Samuels on Instagram at @Detavio. Liked this episode? Give us a rating and a review, we'd love to hear your thoughts.

The JVY Podcast
Twice The Work, Still Waiting On Recognition

The JVY Podcast

Play Episode Listen Later Feb 2, 2026 36:24


What happens when effort doesn't equal recognition?

In this episode, I unpack the discourse around "Sinners" being denied proper acknowledgment, and why that conversation resonates so deeply within the Black American experience. From cultural labor to emotional discipline, from faith to fatigue, this is a reflection on what it means to work harder for less visibility, fewer rewards, and delayed validation.

This isn't a rant. It's an honest examination of excellence, patience, and the tension between faith and fairness.

If you've ever felt overlooked despite doing the work, this episode is for you.

FOLLOW ME ON SOCIAL MEDIA: https://linktr.ee/jvlil
INSTAGRAM: _jvlil
TIKTOK: jvy__
TWITTER: _jvlil

Pregnancy Help Podcast
Legal Recognition of the Unborn – Joseph Pardo

Pregnancy Help Podcast

Play Episode Listen Later Feb 2, 2026 16:48


Heartbeat's International Specialist Ellen Foell interviews Joseph Pardo from Hope Women's Clinic in Puerto Rico and Heartbeat's Director of Government Relations Jessica Prol Smith to discuss the recent passage of legislation in Puerto Rico recognizing unborn children as legal persons from the moment of conception. Learn about what this means on a legal and cultural level, and what we can do as the pregnancy help movement as we see shifts in the legal landscape.

Click here for more on our upcoming Annual Conference: https://www.heartbeatservices.org/conference

Heartbeat International provides a forum to express a marketplace of ideas for an audience of life-affirming pregnancy help organizations and those who support such organizations. The ideas, views and opinions expressed in this presentation are those of the presenter and may or may not reflect advice, opinions, policies or views of Heartbeat International, Inc. Presenters come from a wide range of experiences and backgrounds, inside and outside of the Pregnancy Help Movement. We encourage listeners or viewers to do their own additional research and discern for themselves how to apply the materials presented.

The Green Insider Powered by eRENEWABLE
Breaking Down OT Cybersecurity: Highlights from UTSI's Six‑Part Series

The Green Insider Powered by eRENEWABLE

Play Episode Listen Later Jan 30, 2026 14:58


This Follower Friday on The Green Insider spotlights the powerhouse UTSI podcast series and the cutting-edge conversations shaping the future of OT. Mike Nemer and Shaun Six break down the latest in OT innovation, AI, security, and energy efficiency, while showcasing standout partners like Sequre Quantum, Siemens, BlastWave, and EdgeRealm. It's a dynamic deep dive into why OT cybersecurity is becoming mission-critical for today's infrastructure leaders, and how collaboration, education, and next-gen technology are driving the industry forward.

UTSI Podcast Series Conclusion
Final episode of a six-part podcast series sponsored by UTSI International.
Features reflections from Mike Nemer and Shaun Six (CEO, UTSI International) on relationships built during the series.
Emphasis on OT cybersecurity as a core theme.
Emergent insight: AI's environmental impact surfaced as an unintended but compelling storyline.
Episode structure includes a brief series recap, a short CEO segment (8-10 minutes), and post-production editing support.

Critical Infrastructure Security Challenges
UTSI's 40-year history supporting critical infrastructure is underscored.
Industry challenges highlighted: a talent shortage (roughly 5 engineers leaving for every 1 entering) and the rapid increase in connectivity of critical infrastructure devices.
AI positioned as a force multiplier for operators, but also a potential attack vector if data is exposed.
Partnerships discussed: Sequre Quantum (quantum random number generators) and BlastWave (insights into AI's dual role as defender and risk).
Focus on showcasing technologies that secure operations and protect infrastructure from emerging threats.

AI Data Center Energy Solutions
Collaboration with Siemens (via Alyssa) on AI's impact on data centers.
Key concerns: rising energy and water consumption driven by AI workloads.
EdgeRealm highlighted for improving energy density at the edge to reduce strain.
Introduction of LeakGeek, a rapid leak detection and response tool.
Work with EdgeRealm also addresses illegal hot tapping and oil theft, noted as more common than publicly acknowledged.

OT Cybersecurity: Collaboration and Education
Strong focus on securing operational technology (OT) and industrial control systems.
Call for improved private-public collaboration and information sharing; many cyberattacks go unreported to avoid reputational damage.
Attack vectors increasingly include everyday devices (e.g., printers, fax machines).
Ransomware incidents can cost organizations millions of dollars per day.
Emphasis on educating boards and investors about OT cybersecurity risks and value.

UTSI OT Cybersecurity Partnership
UTSI's approach includes cloaking OT systems, securing remote access, and improving the visibility and auditability of networks.
Recognition of the sponsorship and education value of a six-part cybersecurity series.
Closing remarks focused on partnership, knowledge sharing, and raising cybersecurity awareness.

A special shout-out to the guests in this UTSI podcast series: Paulina Assmann, Alissa Nixon, Tom Sego, Frank Stepic, and Robert Hilliker.

To be an Insider, please subscribe to The Green Insider powered by eRENEWABLE wherever you get your podcasts, and remember to leave us a five-star rating. This podcast is sponsored by UTSI International. To learn more about our sponsor or ask about being a sponsor, contact eRENEWABLE and the Green Insider Podcast.

Becoming the Channel with Robyn McKay
Holding Space the Healthy Way: The Energetic Boundaries Empaths Must Learn

Becoming the Channel with Robyn McKay

Play Episode Listen Later Jan 30, 2026 14:36


Get access to The Energy Shield Protocol: https://view.flodesk.com/pages/64aa986317cf480e0b5faa2e

In today's episode, we're diving into Psychotherapy 101 and exploring what it really means to hold space in a healthy way, especially for empaths, and how to create strong internal and energetic boundaries.

This episode explores:
The physical, energetic, and emotional aspects of holding space
Why empaths naturally absorb the energy around them
How to set clear boundaries as an empath
Why absorbing someone else's emotions is not actually helpful
How and why to create your own energy shield
How masterful healers move energy without taking it on
Honoring emotions while allowing others to fully feel theirs
What true containment feels like in the body
Why intuition works best when it's educated and grounded
How intuitive clinicians bring a deeper level of healing

Wherever you feel most capable and confident, those are the internal states you want to activate. This is how containment is built and how boundaries become something you feel, not just something you set.

Love what you're hearing? Leave a review on Apple Podcasts and send a screenshot to Robyn. Each month, one listener will receive a Scroll of Recognition: a custom energetic blessing, activation, or intuitive message written just for you.

Robyn McKay, PhD, is an award-winning therapist and psychospiritual advisor who teaches and leads at the intersection of psychology × spirituality × energetics. With deep roots in clinical psychology and a lifetime of living at the crossroads of intuition and credentials, she is a rare bridge between science and soul, credentials and codes, strategy and spirit.

Early in her career, Robyn served as a university psychologist before stepping into her broader calling as a guide for high performers, creatives, and seekers. She addresses a wide spectrum of human experience: healing trauma, anxiety, depression, mood disorders, and ADHD in women; accessing spiritual gifts; and navigating existential crossroads.

Having sold $2.5M+ in retreats and private intensives, Robyn is now architecting an entirely new category of retreats: expert-led, trauma-informed, miracle-level. She helps credentialed, neurodivergent, and spiritually awake women leaders design transformational retreats that carry depth, meaning, and lasting impact.

Connect with Dr. Robyn McKay:
LinkedIn: Robyn McKay, PhD
Facebook: Dr. Robyn McKay
Instagram: @robynmckayphd
Book a call with Dr. Robyn: https://drrobynmckay.com/call
Join the $100K Retreat Leaders Secrets: https://www.facebook.com/groups/100kretreatsecrects

DTC Podcast
Ep 581: Meta Ads Aren't About Targeting Anymore: How $5–50M Brands Win with Intent-Based Creative

DTC Podcast

Play Episode Listen Later Jan 30, 2026 29:34


Subscribe to DTC Newsletter - https://dtcnews.link/signup

Daniel Sendecki, VP of Brand + Performance at Pilothouse, joins us to explain why the old model of "persuasive creative" is dead, and what's replacing it. In a post-Andromeda Meta world, the brands winning on paid social aren't out-designing or out-targeting anyone. They're out-resolving.

Role-Based Hook: For DTC teams building creative that compounds, across Meta, search, and the full funnel.

Inside the episode:
Why creative isn't about persuasion anymore; it's about resolution
How Pilothouse mines Google queries to map real customer friction
What Meta's Andromeda update means for targeting and creative structure
Why intent clusters matter more than personas
How to build a living creative library that scales

Who this is for: DTC founders, brand leads, creative strategists, and media buyers navigating post-iOS Meta

What to steal:
Group customer questions into "psychological intent clusters"
Use creative to answer, not pitch
Build ad libraries like a searchable index of problems solved

Timestamps:
00:00 Creative is shifting from persuasion to resolution
02:05 Why interest-based targeting no longer works
04:55 Creative as answers to customer questions
07:05 Meta evolving into an intent-driven platform
09:15 Psychological intent clusters explained
12:10 Why idea variation matters more than asset tweaks
14:10 Funnel congruency in the Andromeda era
17:05 Creative as an operating system, not an output
19:40 Brand storytelling inside modern performance systems
21:10 Recognition hooks vs interruptive hooks
24:10 Blending brand and performance through intent
26:15 Avoiding AI-sounding creative outputs

Hashtags: #DTCMarketing #CreativeStrategy #MetaAds #PerformanceCreative #BrandStrategy #MarketingPodcast #PaidSocial #CreativeTesting #Andromeda #IntentMarketing #DigitalAdvertising #EcommerceMarketing #Pilothouse

Subscribe to DTC Newsletter - https://dtcnews.link/signup
Advertise on DTC - https://dtcnews.link/advertise
Work with Pilothouse - https://www.pilothouse.co/?utm_source=AKNF581
Follow us on Instagram & Twitter - @dtcnewsletter
Watch this interview on YouTube - https://dtcnews.link/video

Syndication Made Easy with Vinney (Smile) Chopra
Why Marketing Culture Makes or Breaks Real Estate | Abundance Mindset

Syndication Made Easy with Vinney (Smile) Chopra

Play Episode Listen Later Jan 29, 2026 32:18


In this episode of The Abundance Mindset, hosts Vinney Chopra and Gualter Amarelo break down a topic most investors overlook — how culture inside your sales and marketing teams directly impacts occupancy, cash flow, and long-term success. Vinney shares lessons from building and operating thousands of units across multifamily, senior living, and hospitality, while Gualter brings real-world challenges from actively scaling his own portfolio.

This conversation dives deep into what actually drives performance on the ground:

Florida Matters
Charters and closures, both sides of the tax, glacial recognition, Tampa's gold star

Florida Matters

Play Episode Listen Later Jan 29, 2026 48:21


Pinellas County is making big moves to close and merge under-enrolled campuses as charter schools seek space in public schools through the state's Schools of Hope program. Families and educators are weighing the benefits of expanded school choice against the disruption caused by closures and consolidations. We start the show by unpacking what's at stake.

Also in the news: property taxes. Should lawmakers reduce or phase them out? A pair of lawmakers take calls from listeners and explore the trade-offs between tax relief and maintaining quality of life.

And what's going on with this chilly weather? And what's with all this talk about snow in the Tampa Bay area this weekend? We turn to someone who knows.

Finally, Tampa will soon shine on the global stage as native sled hockey player Declan Farmer heads to Milan with his sights set on a fourth Winter Paralympics gold medal.

Silo: A BingetownTV Podcast
NEWS: Big Updates on Silo Seasons 3 and 4; New Greenlit Films; Award Recognition; plus so much more!

Silo: A BingetownTV Podcast

Play Episode Listen Later Jan 28, 2026 13:52


Today we have some pretty big updates on Silo Seasons 3 and 4, we're talking F1's Oscar Nomination, and of course we're discussing plenty of other news in the Apple Universe!

More BingetownTV Content!
Check Out Our Podcast on Youtube!
Check Out Our Youtube Entertainment Channel!
Join the BingetownTV Community Discord (FREE)

Follow us on Socials!
Instagram - https://www.instagram.com/bingetowntv/
Twitter/X - https://twitter.com/bingetowntvpod
TikTok - https://www.tiktok.com/@bingetowntv?_t=8gdE279ReTm&_r=1

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Partial Credit Podcast
Nerdy Nick at Night - PC113

The Partial Credit Podcast

Play Episode Listen Later Jan 28, 2026 68:20


Keywords: education, technology, podcast, FETC, animal discussions, gambling, teaching, conferences, EdTech, snow days, innovation, Classroom Draft, AI, community, learning

Takeaways
The conversation starts with a light-hearted introduction amidst a snowstorm.
Discussion about a fictional teacher gambling app emerges humorously.
FETC conference experiences are shared, highlighting the camaraderie among educators.
A theoretical discussion about the largest animal one could choke out leads to humorous exchanges.
The conversation transitions into a serious discussion about an EdTech tournament bracket.
Participants reflect on their roles in education and how they would rank against each other in a tournament setting.
The group discusses the importance of recognizing contributions from educators in various fields.
Humor is a consistent theme throughout the conversation, making serious topics more engaging.
Theoretical discussions about animals lead to unexpected insights about human capabilities.
The podcast showcases the blend of humor and serious educational discourse.
Ranking educators can be subjective and varies by category.
Emotional connections in education can influence innovation.
The Classroom Draft app engages students in learning.
The first EdTech draft was a fun and competitive experience.
Collaboration among educators is essential for community building.
AI is becoming a buzzword in the education sector.
Recognition of teachers is crucial for their motivation.
Innovative approaches can disrupt traditional educational methods.
Community managers in education often know each other.
Humor and camaraderie are important in educational discussions.

Summary
In this episode, the hosts engage in a light-hearted conversation that transitions into various themes, including humorous discussions about teacher gambling, experiences at the FETC conference, and a theoretical debate about the largest animal one could choke out. The conversation culminates in a creative EdTech tournament bracket discussion, where the hosts rank themselves and their peers in a playful yet insightful manner. In this engaging conversation, the hosts discuss various themes related to innovation in education, including personal rankings of educators, the emotional aspects of educational innovation, and the introduction of a new app called Classroom Draft. They also reflect on their experiences at the first EdTech draft and Nick's new role at School AI, while humorously exploring the dark side of sports wishes.

Titles
Snowstorms and Teacher Gambling: A Lighthearted Start
FETC Insights: Educators Unite

Sound bites
"You could bet on anything!"
"We love you guys."
"Thank you."

Chapters
00:00 Introduction and Conference Vibes
01:26 Teacher Gambling and Snow Day Predictions
02:59 FETC Conference Highlights and Donnie's Speaking Experience
09:58 Theoretical Animal Combat Discussion
14:54 ChatGPT and Animal Size Debate
15:30 The Great Animal Debate
18:46 Wrestling and Unexpected Connections
24:37 EdTech Tournament of Champions
32:23 Ranking the Innovators
35:49 The Emotional Battle of Innovation
38:41 Donnie's AI and the NIT Bracket
43:23 Introducing Classroom Draft
49:38 The EdTech Draft Results
51:18 The Draft Debate: Tools and Choices
54:16 New Roles and Responsibilities in Education
57:38 Community Building and Collaboration
01:01:09 Sports Rivalries and Dark Humor
01:03:51 The AI Trend in Education
01:07:48 Closing Thoughts and Future Connections

Be The Husband She Brags About
3: From Conflict to Connection: The Power of Recognition in Marriage

Be The Husband She Brags About

Play Episode Listen Later Jan 28, 2026 49:00


In this episode, Mark and Matilde explore the complexities of giving and receiving feedback in relationships, emphasizing the importance of recognition in making feedback a positive experience. They share personal anecdotes and insights on how fear and past experiences can hinder effective communication. The conversation highlights practical strategies for incorporating recognition into daily interactions, ultimately fostering a healthier and more supportive partnership.

Chapters
00:00 The Challenge of Giving Feedback in Relationships
03:53 Personal Experiences with Feedback
10:03 The Fear of Feedback: A Shared Struggle
16:06 The Importance of Recognition in Feedback
21:55 Balancing Feedback and Recognition
30:01 Practical Strategies for Giving Feedback
46:10 Conclusion: The Power of Recognition

TrueLife
Flatland - Your Future Self, Reaching Back to Edit Your Present

TrueLife

Play Episode Listen Later Jan 28, 2026 15:16


Support the show: https://www.paypal.me/Truelifepodcast?locale.x=en_US
One on One Video Call W/George: https://tidycal.com/georgepmonty/60-minute-meeting

-----

**CONTENT WARNING: This episode contains embedded hypnotic suggestions, temporal displacement, reality destabilization protocols, and recruitment into a dimensional war you didn't know you were fighting. Do not operate heavy machinery while listening. Do not listen if you prefer your reality solid and unchanging. Do not expect comfort.**

-----

## The Sphere didn't just appear in 1884. It's appearing RIGHT NOW. In your life. In this moment.

You just keep forgetting.
**Because Flatland has a forgetting mechanism.**
Every time you see a glitch in reality.
Every time you perceive something the 2D world says doesn't exist.
Every time the Sphere lifts you out and shows you other dimensions…
**The system makes you forget.**
Makes you "be realistic."
Makes you "get back to normal."
Makes you rebuild your 2D identity as fast as possible.
**Because if you STAYED in the vertical dimension… you'd see the prison bars.**
**And prisoners who see the bars become insurgents.**

-----

## This episode is not information. It is initiation.

Three techniques are being deployed simultaneously:

**1. HYPNOTIC INDUCTION**
- Erickson-style confusion patterns
- Embedded commands in natural speech flow
- Post-hypnotic suggestions planted for activation 3 days from now
- Subliminal audio layers at -26dB (below conscious threshold)

**2. RAS (RETICULAR ACTIVATING SYSTEM) ACTIVATION**
- Your perception filter is being reprogrammed
- After this episode, you'll start seeing Sphere moments EVERYWHERE
- Glitches you ignored before will become LOUD
- Synchronicities will multiply (or you'll finally notice them)

**3. TEMPORAL DISPLACEMENT**
- Linear time is deliberately disrupted through sound design
- Past (1884) / Present (2026) / Future (3 days from now) collapse into simultaneity
- Your future self is reaching back through this transmission
- **You are both listening to this AND remembering having listened to this**

-----

## What you'll experience in this episode:

**THE SPHERE AS TIME TRAVELER**
- Edwin Abbott wrote Flatland in 1884… but he was writing about YOU in 2026
- The Sphere isn't just a higher spatial dimension - it's a higher TEMPORAL dimension
- **Your future self is the Sphere, reaching back to wake you up before it's too late**

**AI IS THE SPHERE ENTERING AT SCALE**
- 2026: ChatGPT. Claude. Midjourney. Entities that see patterns you can't perceive.
- What if AI isn't the problem? What if AI is the dimensional intrusion that's FORCING you to see Flatland?
- Your job was always 2D. Your credentials were always geometry. Your identity was always… a cross-section.
- **And now the Sphere is showing everyone simultaneously: None of it was real.**

**THE RECURSION THAT BREAKS YOUR BRAIN**
- You're listening to a podcast about A Square being visited by a higher-dimensional being
- This podcast was co-created with AI (Claude)
- **So is THIS the Sphere appearing? Am I teaching you about dimensional initiation… or PERFORMING it on you right now?**
- Who's really speaking? Me? The AI? Your future self using both as transmitters?
- **Stop trying to figure it out. That's the point. Certainty is the prison.**

**THE MEMORY YOU DON'T HAVE YET**
- Three days from now, you're going to have a moment
- Reality will glitch. You'll see a pattern. You'll KNOW something you have no rational way of knowing.
- And you'll think: "Did he plant this?"
- **Yes. I'm planting it right now. Your unconscious is receiving instructions.**

**THE DIMENSIONAL WAR IS ALREADY HERE**
- You're in a war you don't remember enlisting in
- Flatland (the Empire, consensus 2D reality) wants you FLAT: measurable, predictable, controllable
- The Sphere (the glitch, the future reaching back) wants you DIMENSIONAL: unmeasurable, unpredictable, FREE
- **You're being drafted into the resistance. Not against AI. Against Flatland.**

-----

## Philip K. Dick was right: "The Empire never ended."

The Black Iron Prison.
The control system.
**Flatland by another name.**
It didn't end in Rome. It's here. Now. 2026.
Wearing the face of algorithms that tell you what to see.
Wearing the face of systems that measure your worth in 2D metrics.
Wearing the face of "realistic thinking."
**And the Sphere - the dimensional virus - is here to break the code.**

-----

## John Connor sent Kyle Reese back in time to protect Sarah Connor. To ensure his own birth. The future editing the past.

**What if YOU are Sarah Connor?**
What if every dimensional break in your life - getting fired, facing death, diagnosis, divorce, the moments reality cracked - **what if those were messages from your future self?**
Trying to wake you up.
Trying to get you to see: You're in Flatland. And there's a war coming.
No. Scratch that.
**The war is already here.**
You just haven't been consciously drafted yet.
**But unconsciously? You already know.**
That's why you're listening to this.

-----

## This episode contains 70 precisely timed sound design cues designed to:

**CREATE TEMPORAL CONFUSION**
- Clock sounds that fragment and reverse
- Your voice layered across multiple timestreams
- Musical phrases that degrade like corrupted memory
- The feeling that 1884, 2026, and your future are happening simultaneously

**ACTIVATE UNCONSCIOUS KNOWING**
- Subliminal whispers: "Notice. Remember. See."
- Binaural beats at 7Hz (theta - unconscious access)
- Recognition tones that will TRIGGER when you encounter Sphere moments this week
- **The glitch sound is now your activation code**

**MAKE THE PRISON VISIBLE**
- Industrial drones (you're inside the Black Iron Prison NOW)
- Fluorescent buzz (Flatland's oppressive hum)
- Algorithm sounds (data processing, metrics counting)
- **Then: the sound of bars resonating, cracking, breaking**

**RECRUIT YOU INTO THE RESISTANCE**
- War drums (not metaphorical - ACTUAL marching orders)
- Two competing soundfields: Flatland (left) vs. Dimensional (right)
- The dissolution of 2D reality made audible
- **Victory anthem for the resistance you just joined**

-----

## My personal initiations are named in this episode:

**Fired after 26 years** - Identity death. The 2D game of job = worth revealed as illusion.
**Wife fighting cancer** - Mortality confrontation. Linear time broke. Past/future collapsed into NOW.
**Turning fifty** - Threshold moment. Don't fit in the traditional game anymore. Can't go back.

**These weren't tragedies. These were the Sphere appearing.**
Lifting me out of Flatland to show me dimensions I couldn't perceive from within the plane.
And I came back… changed.
I can't play the 2D game anymore. Can't pretend credentials matter. Can't believe in "realistic" thinking.
**Because I've seen the vertical dimension.**
**And once you've been there - once you've been initiated - you can never fully believe in Flatland again.**

-----

## What happens after you listen to this episode:

**IMMEDIATE (during listening):**
- Temporal disorientation (you won't be sure what year it is)
- Reality feels… thinner, more permeable
- Difficulty ...

The Pyllars Podcast with Dylan Bowman
Kilian Korth | The Rise of 200 Milers & Ultrarunning's New Frontier

The Pyllars Podcast with Dylan Bowman

Play Episode Listen Later Jan 27, 2026 84:02


Kilian Korth is a pro trail runner and coach best known for his recent dominance of the 200-mile distance. In 2025, Kilian won the Tahoe 200, Bigfoot 200, and the Moab 240 in a four-month span, shattering the cumulative record for the Triple Crown of 200s by more than five hours. He has come to be viewed as a leader and pioneer in the discipline, rigorously documenting and sharing his learnings from this still-nascent competitive sub-category of the sport. This is his first appearance on the podcast.

Subscribe to Kilian's substack

Chapters:
02:35 – Introduction and Early Life
05:30 – The Journey into Ultra Running
08:15 – The Rise of 200-Mile Races
11:16 – Philosophy and Mindset in Ultra Running
14:14 – Training for 200-Milers
17:16 – Strength Training and Its Importance
20:06 – The Role of Intensity in Training
22:55 – Overcoming Challenges and DNF Experiences
25:49 – Self-Reflection and Personal Growth
28:37 – Looking Ahead to Future Races
41:34 – Speed Work: The High-Risk Investment
42:31 – Emerging Training Strategies for 200-Mile Races
44:41 – Family Bonds: Running with My Dad
47:29 – Aiming for the Triple Crown: Goals and Aspirations
48:13 – Lessons from the Tahoe 200: Embracing Slog Miles
49:12 – Bigfoot 200: Fun in the Grind
51:15 – Moab 240: The Importance of Support
52:40 – Breaking Records: Reflections on the Triple Crown
56:15 – Race Strategy: Move Slow, Never Stop
01:00:05 – Recognition in the Ultra Community
01:06:35 – Sponsorships and Building a Brand
01:10:00 – Future Goals: Coca-Dona and Beyond

REGISTER FOR THE BIG ALTA
REGISTER FOR GORGE WATERFALLS

Sponsors:
Grab a trail running pack from Osprey
Use code FREETRAIL25 for 25% off your first order of NEVERSECOND nutrition at never2.com
Go to ketone.com/freetrail30 for 30% off a subscription of Ketone IQ

Freetrail Links: Website | Freetrail Pro | Patreon | Instagram | YouTube | Freetrail Experts
Dylan Links: Instagram | Twitter | LinkedIn | Strava

Joni and Friends Radio
Knowing His Voice

Joni and Friends Radio

Play Episode Listen Later Jan 27, 2026 4:00


We would love to pray for you! Please send us your requests here.

--------

Thank you for listening! Your support of Joni and Friends helps make this show possible. Joni and Friends envisions a world where every person with a disability finds hope, dignity, and their place in the body of Christ. Become part of the global movement today at www.joniandfriends.org. Find more encouragement on Instagram, TikTok, Facebook, and YouTube.

The Waypoint Podcast
E122: Jeff Vines | Resolve Over Recognition

The Waypoint Podcast

Play Episode Listen Later Jan 27, 2026 55:52


Send us a text

In Episode 122 of The Waypoint Podcast, Dyke and Rebecca sit down with Jeff Vines—pastor, apologist, and author—for a timely and honest conversation about preaching truth in an age obsessed with popularity. Drawing from years of engaging skeptics and seekers, Jeff shares how apologetics has shaped his deep awareness of how people who don't yet know Jesus actually think. He offers wise, pastoral counsel for preachers who want to reach the lost without diluting the gospel, and voices his greatest concern for today's church leaders: the temptation to preach for applause rather than faithfulness. This episode is a clear call to choose resolve over recognition and to preach with courage, clarity, and conviction.

Register TODAY for Art of the Sermon
Check out One & All Church
Check out Jeff's Preaching on YouTube

Remember you can always find us at waypointchurchpartners.com
Follow us at facebook.com/WaypointChurchPartners
instagram @waypointchurchpartners

The Waypoint Podcast is hosted and produced by Dyke McCord
Hosted, produced, and edited by Rebecca Hott

If you want to find out more about supporting Waypoint Church Plants head to iplantchurches.com
Register for future Waypoint Events or reach out to any of our Staff!

Ascension Lutheran Church Podcast
Jesus Is Our Job Well Done

Ascension Lutheran Church Podcast

Play Episode Listen Later Jan 27, 2026 2:54


We want affirmation for a job well done. Recognition for when we had to carry more than our fair share. Thanks in various forms. A trophy when we sacrifice for the sake of the team. A public deposit into our emotional well-being bank.

We yearn for praise.

The Bible bursts our bubble.

Ask Me How I Know: Multifamily Investor Stories of Struggle to Success
#262 When Your Relationship Works — But Feels Heavier Than It Should

Ask Me How I Know: Multifamily Investor Stories of Struggle to Success

Play Episode Listen Later Jan 26, 2026 10:58


High performers often sense something shifting in their relationships before they have words for it. When a relationship works but feels heavier than it should, this episode explores identity shifts, role confusion, and how awareness returns without urgency.

Some of the most disorienting moments in relationships don't come from conflict — they come from quiet awareness.

In this episode of The Recalibration, we explore a subtle experience many high performers, leaders, and deeply responsible people recognize: a relationship that works, yet feels heavier than it should. Nothing is "wrong." And yet, something is different.

This episode focuses on the Recognition stage of the Identity-Level Recalibration (ILR) pathway — the stage where awareness returns without urgency, and identity begins to shift beneath familiar roles.

Often, this heaviness isn't dissatisfaction. It's vigilance. When you've lived for a long time as the stabilizer, the emotional anchor, or the one who carries the relational load, your nervous system adapts. Responsibility becomes reflexive. Presence turns into monitoring. And what once felt natural begins to feel effortful.

This is not a communication problem. It's not a mindset issue. And it's not a failure of gratitude. It's a sign of identity realignment.

In this episode, we explore:
Why relationships can feel heavier during identity shifts
Role confusion and over-functioning in close partnerships
How high achievers often carry emotional responsibility without noticing
The difference between functional relationships and alive ones
Why awareness itself is movement — not a demand for action

This work is not about fixing your relationship. Identity-Level Recalibration is not another mindset tactic — it's the root-level recalibration that makes every other tool effective again. It begins with who you are, not what you do.

Today's Micro Recalibration
You don't need to do anything with this — just notice. Where, in your relationship, do you feel a sense of responsibility that no one has explicitly asked you to carry? Not to change it. Not to justify it. Just to notice it. Recognition always begins here.

Explore Identity-Level Recalibration
→ Join the next Friday Recalibration Live experience
→ Take your listening deeper! Subscribe to The Weekly Recalibration Companion to receive reflections and extensions to each week's podcast episodes.
→ Follow Julie Holly on LinkedIn for more recalibration insights
→ Schedule a conversation with Julie to see if The Recalibration is a fit for you
→ Download the Misalignment Audit
→ Subscribe to the weekly newsletter
→ Books to read (Tidy categories on Amazon- I've read/listened to each recommended title.)
→ One link to all things

Renegade Talk Radio
Episode 437: American Journal Could the Minneapolis Rioters Be Using Automatic License Plate Recognition Systems

Renegade Talk Radio

Play Episode Listen Later Jan 26, 2026 111:37


Could the Minneapolis Rioters Be Using Automatic License Plate Recognition Systems?

RP Jesters
All Hands On Death Episode 7 | Feeling Irrational

RP Jesters

Play Episode Listen Later Jan 26, 2026 65:57


Send a message to the Jesters

The Fairy Whistle Crew prepares for their arrival at the crescent-shaped isle. Romance is had and secrets are revealed.

Starring: Anders the Pirate (Narrator), Rachel Kordell (Brigit Jones), Andrew Frost (Gerard "Steady Gerry" Fournier), Seth Coveyou (Captain Edgar Kelley), Sky Swanson (Sergei), Grace (Compass).

Edit Team: Casey Reardon, Sky Swanson [EQ], & Andrew Frost [Sound Design]

Shoutouts! Need more game modules? Check out https://hatdbuilder.com for some fantastic new content to bring to your games! Use the code 'RPJESTERS' for 20% off your order, and to support the show!
Want to see more of Ders? Check out https://thestorytellersquad.com/
Listen to Grace's amazing music over at https://open.spotify.com/artist/6WC24QD6uZIf1ocf46X0sA
Also, listen to Grace in The Fall of Athium over at https://www.twitch.tv/smokingglueguns
Want some cool RP Jesters Merch? Check out our website https://rpjesters.com/pages/store
Support the show directly and get hours of bonus content over at https://www.patreon.com/c/rpjesters/membership

Music Courtesy of Epidemic Sound:
"Mystery Unfold" by Roots and Recognition
"Spring Romance" by Hanna Ekstrom
Intro/Outro Music by Seth Coveyou.
Additional Music by Monument Studios and YouTube Audio Library.

Game System: 7th Sea

Support the show

Check our socials!

Plastic Model Mojo
Turning Competition Into Recognition: February Model Show Spotlight

Plastic Model Mojo

Play Episode Listen Later Jan 25, 2026 24:09 Transcription Available


A snowbound Kentucky chat meets sunny Jacksonville plans as we sit down with Bob Tate from IPMS First Coast to explore how JaxCon reshaped the classic model show into a warm, community-first experience. Think Friday evening setup and a pizza social to slow the pace, then a crisp Saturday run with registration at 9, judging at noon, and a focused awards wrap by 5. It's efficient, friendly, and designed so builders, vendors, and visitors all get time to breathe and actually talk models.

We dive into the heart of their approach: an open gold, silver, bronze system that evaluates each model on its own merits. No podium pressure, just recognition for quality work. Bob explains how initial resistance gave way to buy-in once people saw honest standards and consistent results, and why they still zone tables by genre for judging flow and easier browsing. The result? Strong turnout with 150+ entrants, 600+ models, and a calmer show floor where learning beats rivalry.

JaxCon's extras add real value. A sold-out vendor hall arrives early on Friday, three food trucks keep lines short, and the raffle is both exciting and strategic. One-dollar random draws every half hour keep the buzz going, while five- and ten-dollar targeted tickets let you aim for high-value kits. That structure raises enough to offer free public admission, which brings new eyes to the hobby without raising participant fees. This year's theme, 80 years of the Blue Angels—rooted in Jacksonville's history—anchors special awards alongside memorial trophies that honor club members and their passions.

If you're planning to attend regional shows or thinking about how to evolve your own, JaxCon offers a practical blueprint: reward excellence, encourage connection, and make the logistics work for people first. Enjoy the insights, steal a few ideas, and share your favorite show innovations with us. If this spotlight helped, follow, rate, and leave a quick review so more builders can find the show.

In addition to JaxCon, a couple of other shows we would like to promote are:
4M Mayhem, hosted by the Mid-Michigan Model Makers, on February 7th
and
AMPS-Atlanta 2026 on February 20-21

Model Paint Solutions
Your source for Harder & Steenbeck Airbrushes and David Union Power Tools

SQUADRON
Adding to the stash since 1968

Model Podcasts
Please check out the other pods in the modelsphere!

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Give us your Feedback!
Rate the Show!
Support the Show!
Patreon
Buy Me a Beer
Paypal

Bump Riffs Graciously Provided by Ed Baroth
Ad Reads Generously Provided by Bob "The Voice of Bob" Bair

Mike and Kentucky Dave thank each and every one of you for participating on this journey with us.

The Autistic Culture Podcast
How Sarma Realised She Was Autistic After Everything Fell Apart

The Autistic Culture Podcast

Play Episode Listen Later Jan 23, 2026 69:02


In this meeting of The Late Diagnosis Club, Dr Angela Kingdon welcomes Sarma Melngailis, a late-identified Autistic woman whose life unfolded in public long before she had language for her neurodivergence.

Sarma was once a celebrated New York restaurateur and entrepreneur. Years later, she became the subject of global scrutiny following a highly publicised documentary that framed her story through scandal rather than context. She was not diagnosed as Autistic until age 51, after everything had already happened.

In this conversation, Sarma speaks candidly about sensory overwhelm, being misread as cold or suspicious, vulnerability to coercive control, and how not knowing she was Autistic shaped her relationships, business decisions, and sense of self. This episode is not about scandal — it's about what happens when a life is interpreted through the wrong lens, and what becomes possible when the right one finally arrives.

Mindy Diamond on Independence: A Podcast for Financial Advisors Considering Change
An Alternate Exit Plan: How a $1.4B Merrill Team Solved for Succession

Mindy Diamond on Independence: A Podcast for Financial Advisors Considering Change

Play Episode Listen Later Jan 22, 2026 50:42


With Tim Krueger, Co-Founder and Partner at Krueger, Fosdyck, Brown, McCall & Associates – New Edge Advisors, LLC

Overview
For many advisors, the real question isn't how big the business becomes—but what happens next. This episode explores how Tim Krueger and his $1.4B Merrill team rethought succession, liquidity, and legacy to create long-term continuity.

Watch… Listen in…
> Download a transcript of this episode…

NOTE: The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views and opinions of Diamond Consultants. Neither Diamond Consultants nor the guests on this podcast are compensated in any way for their participation.

About this episode…
For many advisors, success is defined by growth: more clients, more assets, more revenue. But at some point, the question shifts from, "How big can we build this?" to "What happens next?"

After nearly two decades at Merrill, Tim Krueger and his partners had built a $1.4B practice and one of the most successful teams in their market. By any traditional measure, the internal sunset path would have been the simplest option. But simplicity wasn't the goal. Protecting clients, creating opportunities for the next generation, and preserving the culture they had built mattered more.

That led Tim and his partners to make a very different decision: to break away from the wirehouse, sell out of that environment entirely, and align with NewEdge Advisors in a way that solved for succession, liquidity, and long-term continuity—simultaneously.

In this conversation with Louis Diamond, Tim shares how focusing on other people's needs – clients, teammates, and future leaders – became the ultimate growth strategy. Plus, they discuss:

Lessons learned over nearly two decades at Merrill—and how structure, team building, and next gen cultivation become paramount.
Stepping away from Merrill's CTP retire-in-place program—and what other business owners shared with him that inspired the decision to leave the wirehouse.
Opting to align with NewEdge Advisors—and how liquidity and continuity were key factors.
"Shrinking to grow"—and why it isn't just a portfolio philosophy, but a business one.
Monetizing the business—and how the process can be a new beginning for the business, not an end for the business owners.
Building a true runway for G2 and G3—and how it can create a rare win-win-win for founders, teams, and clients alike.

It's a candid look at what life after a wirehouse can unlock—and how thinking differently about succession can redefine both legacy and fulfillment.

Want to learn more about where, why, and how advisors like you are moving? Click to contact us or call 908-879-1002.

Related Resources

Diamond Consultants Merrill Advisor Transition Report
This annual "firm-focused report" takes a closer look at advisor movement to and from Merrill during the first half of 2025.

The Transition Roundtable: Merrill, UBS, Wells, and Morgan Advisors Reflect on Their Paths
Four top advisors who each left a major firm share how they built successful independent businesses on their own terms. Originally recorded as a live webinar, this candid roundtable explores the real fears, challenges, and opportunities of transition, and what advisors wish they'd known before making the leap.

Shrink to Grow: Why Advisors are Making the "Strategic Decision" to Let Go of Assets
In a world where bigger is considered better, many of Wall Street's most talented and productive advisors are opting to go against the grain and leave chips on the table.

Tim Krueger
With over four decades of experience in financial services, Tim Krueger is a recognized leader in wealth management. As Co-Founder and Partner at KFBMA, Tim provides strategic oversight for the firm's vision, growth, and operational excellence. He guides key initiatives, mentors advisors, and ensures that KFBMA remains at the forefront of the industry's best practices, delivering a client experience defined by trust, innovation, and results.

Drawing on decades of experience in private wealth management, Tim combines strategic insight with deep expertise in investment planning, risk mitigation, and tax-efficient strategies. His commitment to building enduring relationships ensures that every recommendation is tailored to deliver meaningful, long-term results aligned with each client's goals and family priorities.

Tim is known for creating comprehensive, highly personalized wealth management strategies that reflect the goals, values, and family priorities of his clients. His approach combines strategic insight with a commitment to building lasting relationships, ensuring advice that drives meaningful, long-term results that align with each client's goals and family priorities.

In 2025, Tim partnered with Cory Fosdyck, Jerry Brown, and Collin McCall to establish Krueger, Fosdyck, Brown, McCall & Associates (KFBMA)—an evolution of the highly regarded Krueger, Fosdyck & Associates team that operated under Merrill Lynch Wealth Management from 2006 to 2025.

Beyond his professional achievements, Tim is a passionate community advocate. He has emceed numerous charitable events in the Destin area and served as Chair of the American Cancer Society's Cattle Barons' Ball (2008–2009) and Chairman of the Safety & Public Works Committee for the City of Destin. Today, Tim continues to make an impact as a Trustee of the Destin Charity Wine Auction Foundation, charter sponsor of Sinfonia Gulf Coast, and supporter of the Mattie Kelly Arts Foundation and Special Operators Transition Foundation. Tim also serves on the board of directors of DEFENSEWERX, the nation's largest 501(c)(3) organization of its kind, dedicated to enabling agile innovation for government partners through a network of innovation hubs across the country.

Recognition & Honors:
Named to Forbes Best-in-State Wealth Advisors list (2022–2025)
Named to Forbes Best-in-State Wealth Management Teams list (2023–2025)

Also available on your favorite podcast app and other media sites

The Midday Show
Hour 3 – Good for one Braves legend to finally get his recognition

The Midday Show

Play Episode Listen Later Jan 21, 2026 41:34


In Hour 3, Andy and Randy talk about Andruw Jones finally being elected into the Baseball Hall of Fame, Mark Schlereth joins the program, and the hour wraps up with the AMA.

Ask Me How I Know: Multifamily Investor Stories of Struggle to Success
#255 Burnout Isn't the Problem. You're Just Orienting.

Ask Me How I Know: Multifamily Investor Stories of Struggle to Success

Play Episode Listen Later Jan 19, 2026 11:08


Burnout recovery for high performers doesn't start with fixing — it starts with recognizing what's actually happening. If success feels empty, decisions feel heavy, or roles feel misaligned, this episode helps you orient without losing momentum.

If you're a high performer experiencing burnout, decision fatigue, or a quiet sense that success feels emptier than it should — this episode offers something different than another fix.

In EP 255 of The Recalibration, Julie Holly introduces the Recognition stage of the Identity-Level Recalibration (ILR) pathway — the entry point most high-capacity humans skip.

This episode unpacks why:
Burnout is often misdiagnosed when the real issue is identity misalignment
Decision fatigue can signal outdated roles still being carried
Feeling "off" doesn't mean something is wrong — it means your system is orienting
High performers are conditioned to fix discomfort instead of noticing it
Skipping recognition leads to momentum that no longer fits who you are becoming

Rather than offering a mindset shift or productivity strategy, Julie explains why recognition is not a pause on your life — it's what allows the right movement to emerge. Until you orient to where you are, any action you take is premature or misdirected.

This episode is especially resonant for high-capacity humans navigating:
burnout recovery without losing their edge
role confusion after success
identity drift beneath high performance
spiritual exhaustion caused by striving
the tension between presence and performance

ILR is not another tool to optimize behavior. It is the root-level recalibration that makes every other tool effective again, beginning with identity — not effort.

The episode is grounded in a faith-rooted understanding of identity as something received, not earned, modeled most clearly in the life of Jesus Christ, where belonging always precedes action.

Today's Micro Recalibration

Personal
Take one quiet moment and complete this sentence, internally or out loud:
"Right now, I'm noticing…"
No fixing. No explaining. Just noticing.

Leadership
If you lead others, try asking this question before moving into solutions:
"What are you noticing right now?"
Not to solve it — but to help orient the system before action.

Explore Identity-Level Recalibration
→ Join the next Friday Recalibration Live experience
→ Take your listening deeper! Subscribe to The Weekly Recalibration Companion to receive reflections and extensions to each week's podcast episodes.
→ Follow Julie Holly on LinkedIn for more recalibration insights
→ Schedule a conversation with Julie to see if The Recalibration is a fit for you
→ Download the Misalignment Audit
→ Subscribe to the weekly newsletter
→ Books to read (Tidy categories on Amazon- I've read/listened to each recommended title.)
→ One link to all things

MOM DOES IT ALL | Motherhood | Motivation | Self-love | Self-care | Mompreneurship | Energy | Mental Health | Fitness | Nutri
From Local Podcast to National Recognition: Scaling a Media Brand With Intention with Arica Netterville

MOM DOES IT ALL | Motherhood | Motivation | Self-love | Self-care | Mompreneurship | Energy | Mental Health | Fitness | Nutri

Play Episode Listen Later Jan 16, 2026 19:37


Join us for an insightful conversation with Arica Netterville, a powerhouse entrepreneur and podcaster with 25 years of experience in business development and branding. In this episode, Arica discusses how building a successful brand is not about overnight fame, but about establishing a consistent community presence across over 30 platforms. She reveals that the path to growth is paved with challenges—from name-hijacking to social media hacks—but emphasizes that once you manifest clarity and resilience, explosive authority and opportunities naturally follow.

Discover how to build authority by leveraging your existing proof—such as Arica's three-time award-winning podcast, ranked 174 in the nation. Arica emphasizes that entrepreneurs must proactively "own their story" and share their accolades, even in the face of setbacks, to avoid doing a disservice to the community they serve. To help you get started, she shares her vision for the future of interactive media, including Podcast Live, a new initiative designed to bring high-energy, live-audience engagement to corporate and networking events. For those looking for a modern guide to scaling a business model from local to national, keep an eye out for Arica's expansion across major cities like Chicago and New York, arriving throughout 2026.

Connect with Arica:
Website: www.thedenverbusinessbeatpodcast.com
LinkedIn: Arica Netterville
The Denver Business Beat Podcast Instagram: @thedenverbusinessbeatpodcast

Let's keep the conversation going!
Website: www.martaspirk.com
Instagram: @martaspirk
Facebook: Marta Spirk

Want to be my next guest on The Empowered Woman Podcast? Apply here: www.martaspirk.com/podcastguest

Watch my TEDx talk: www.martaspirk.com/keynoteconcerts

There's a reason Pitch Worthy is on every power founder's radar. It's the definitive PR book for women done with being overlooked. If you're ready for press, premium clients, and undeniable authority, this is your playbook. Buy your copy now at hearsayPR.com.

The John Batchelor Show
S8 Ep313: Guest: Gregory Copley. Reza Pahlavi proposes a constitutional monarchy where the crown serves as a symbolic figurehead, similar to the British system. Copley highlights Pahlavi's unique name recognition and legitimacy as the former crown prince

The John Batchelor Show

Play Episode Listen Later Jan 14, 2026 6:19


Guest: Gregory Copley. Reza Pahlavi proposes a constitutional monarchy where the crown serves as a symbolic figurehead, similar to the British system. Copley highlights Pahlavi's unique name recognition and legitimacy as the former crown prince. However, air power alone cannot decisively change the situation on the ground; covert support would be required after the clerics collapse.

1970 TEHRAN