Podcasts about Doubling

  • 1,848 PODCASTS
  • 2,621 EPISODES
  • 37m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Feb 13, 2026 LATEST

POPULARITY

[Popularity chart: 2019–2026]


Latest podcast episodes about Doubling

TORQUE UP
From 1 Van to 22 Electricians in 6 Years | DEC ELEC's Blueprint for Growth


Feb 13, 2026 · 70:02


What does it really take to scale an electrical business? In this episode of Torque Up, we sit down with Declan from DEC ELEC, who built his company from a one-man operation to a 22-man team in just six years. No fluff. No fake hype. We break down:

• The 16-hour shifts at the start
• When growth nearly broke him
• Hiring his partner to systemise the business
• Why tracking, structure and process are everything
• Winning council and housing association contracts
• The difference between turnover and profit
• Doubling revenue year after year
• Stepping off the tools and becoming a true business owner
• Investing in coaching and personal growth

If you are an electrician thinking about scaling, this episode will change how you see your business. Sponsored by Chauvin Arnoux UK.

Thrivetime Show | Business School without the BS
Cleaning Business Podcast | How to Double Your Profits Without Doubling Your Working Hours | 7 Clay Clark Client Success Stories


Feb 12, 2026 · 51:55


Welcome to the ThrivetimeShow.com Cleaning Business Podcast Series. During this 100-episode business coach podcast series, Clay Clark teaches how you can achieve success in automotive repair, carpet cleaning, dog training, grooming, home building, home cleaning, home remodeling, manufacturing, medical, online sales, podcasting, photography, signage, skin care, and other industries. #CleaningBusinessPodcast

Where Can You Find Thousands of Clay Clark Client Success Stories? https://www.thrivetimeshow.com/testimonials/

Breaking Down the 1,462% Growth of Stephanie Pipkin with Clay Clark: An EOFire Classic from 2022 - https://www.eofire.com/podcast/clayclark8/

Who Is Clay Clark? Clay Clark is the father of five kids, the host of the 6X iTunes chart-topping ThrivetimeShow.com podcast, the 2007 Oklahoma SBA Entrepreneur of the Year, the 2002 Tulsa Metro Chamber of Commerce Young Entrepreneur of the Year, an Amazon best-selling author, a singer/songwriter, and the founder of several multi-million-dollar businesses. https://www.forbes.com/councils/forbescoachescouncil/people/clayclark/

Where Can You Learn More About Clay Clark? https://www.thrivetimeshow.com/need-business-coach/#coaching-about-founders

Where Can You Read Clay Clark's 40+ Books? https://www.amazon.com/stores/Clay-Clark/author/B004M6F5T4?ref=sr_ntt_srch_lnk_1&qid=1767189818&sr=8-1&shoppingPortalEnabled=true

Where Can You Discover Clay Clark's Songs & Original Music? https://open.spotify.com/album/2ZdE8VDS6PYQgdilQ1vWTP?si=Am65WUlIQba4OLbinBYo1g

Contact w/ Chris O'Connor
Doubling up - Josh Francis - Stuff Island #221


Feb 12, 2026 · 62:03


Tommy and Chris are joined this week by Josh Francis from the Friendly Fire Podcast.

Comedians Chris and Tommy Pope are making all kinds of Stuff on the paytch. Each week they talk about anything & everything under the sun. Tommy also chefs up some delicious meals. It's a blast, folks. Check out our second channel @LookatDish where Tommy Pope and Chris O'Connor cook elaborate meals with your favorite comedians.

Head to https://www.squarespace.com/STUFFISLAND to save 10% off your first purchase of a website or domain using code STUFFISLAND. #ad

Get 10% off your first month of BlueChew Gold with code STUFFISLAND. That's promo code STUFFISLAND. Visit https://www.BlueChew.com for more details and important safety information. #comedy

SUB TO PATREON: patreon.com/stuffisland

Control Body Odor ANYWHERE with @shop.mando and get $5 off your Starter Pack (that's over 40% off) with promo code [STUFFISLAND] at https://www.Mandopodcast.com/[STUFFISLAND]! #mandopod

Click the link http://kalshi.com/r/stuff or download the Kalshi App and use code STUFF to sign up and trade today! #ads

Download Cash App Today: https://capl.onelink.me/vFut/knz4su0l #CashAppPod. Cash App is a financial services platform, not a bank. Banking services provided by Cash App's bank partner(s). Prepaid debit cards issued by Sutton Bank, Member FDIC. See terms and conditions at https://cash.app/legal/us/en-us/card-agreement. Cash App Green, overdraft coverage, borrow, cash back offers and promotions provided by Cash App, a Block, Inc. brand. Visit http://cash.app/legal/podcast for full disclosures.

Follow Chris on IG: https://www.instagram.com/achrisoconnor
Follow Tommy on IG: https://www.instagram.com/tommyjpope

#comedy #comedypodcast

Learn more about your ad choices. Visit megaphone.fm/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean:
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google:
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are much more cost effective and lower latency, but still quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, they were like, you need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable, affordable model that enables a whole bunch of lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end frontier model that is really useful for deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models, and how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? You worked on so many ideas that end up being influential, but in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories (you know, this one's going to be really good at sort of mammals, and this one's going to be really good at indoor room scenes or whatever), and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. Often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution. And then, well, you can spike models, but usually it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. And I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels.
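The soft-label distillation Jeff is describing (training the small model against the big model's logits rather than one-hot labels) looks roughly like this. A minimal PyTorch sketch with toy shapes and made-up hyperparameters, in the spirit of the Hinton, Vinyals & Dean paper, not anything Gemini-specific:

```python
# Minimal sketch of logit (soft-label) distillation. Toy shapes, and the
# temperature/alpha values are illustrative choices, not anyone's production recipe.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Blend cross-entropy on hard labels with a KL term that pushes the
    student's softened distribution toward the teacher's."""
    # Soft targets: the teacher's full distribution carries far more signal
    # per example than a one-hot label.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kd = kd * (temperature ** 2)  # standard scaling so gradient magnitudes stay comparable
    ce = F.cross_entropy(student_logits, hard_labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 4 examples on a 10-class problem.
teacher_logits = torch.randn(4, 10)                      # from the big (frozen) model
student_logits = torch.randn(4, 10, requires_grad=True)  # from the small model being trained
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```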
Jeff Dean: And so, you know, I think that's what we've observed: you can get very close to your largest model's performance with distillation approaches. And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think it's an important set of capabilities to have. And inference time scaling can also be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, cool. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, just economics-wise: because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, for AI Overviews and AI Mode.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now not just "write me a for loop" but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important, and Flash is one way of doing that. Obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs: the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kinds of attention operations, or having sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think of the capability this way: in certain tasks, the Pro model today has saturated some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more. So, I mean, I think this happens in my own usage. I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask it to do much more complicated things. And I think that's true not just of coding, but of, you know, now: can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment, or whatever. That's a more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas, in order to make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards?

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can work on improving that capability for whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I think once it hits 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's either the case that you've now achieved that capability, or there's also the issue of leakage of public data, or very related kinds of data, into your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know the data wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on that's more specialized for this particular kind of task?
Do we need a bunch of architectural improvements or some sort of model capability improvements? What would help make that better?

Shawn Wang [00:12:53]: Is there such an example, a benchmark-inspired architectural improvement? I'm just jumping on that because you just...

Jeff Dean [00:13:02]: I mean, I think some of the long context capability of the Gemini models, which came, I guess, first in 1.5, really were about looking at, okay, we want to have, you know...

Shawn Wang [00:13:15]: Immediately everyone jumped to completely green charts. Everyone had it, and I was like, how did everyone crack this at the same time? Right. Yeah.

Jeff Dean [00:13:23]: I mean, I think, as you say, that single needle-in-a-haystack benchmark is really saturated, for at least context lengths up to 128K or something. We're trying to push the frontier to 1 million or 2 million of context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, in the context and then actually being able to make use of that is useful. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic "take all this content and produce this kind of answer from a long context" benchmarks that better assess what people really want to do with long context, which is not just "can you tell me the product number for this particular thing."

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where, yeah, you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. Right? Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not, I think, going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful.
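A rough back-of-envelope on why "just scale the context window" stops working and you end up wanting the illusion of attention instead: the score matrix in vanilla self-attention grows quadratically with sequence length. The per-layer width and layer count below are invented illustrative numbers, not any real model's configuration:

```python
# Back-of-envelope: score computation in vanilla self-attention is roughly
# O(n^2 * d) multiply-accumulates per layer. All constants here are made up
# for illustration only.
d_model = 8192    # assumed attention width per layer (hypothetical)
n_layers = 64     # assumed layer count (hypothetical)

def attention_macs(n_tokens):
    # QK^T plus attention-weighted V: roughly 2 * n^2 * d per layer.
    return 2 * (n_tokens ** 2) * d_model * n_layers

for n in (128_000, 1_000_000, 1_000_000_000, 1_000_000_000_000):
    print(f"{n:>16,d} tokens -> {attention_macs(n):.2e} MACs")

# Going from 1M to 1T tokens multiplies the quadratic term by (10^6)^2 = 10^12,
# which is why retrieval-style systems that only *appear* to attend to
# everything become the practical path.
```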
Jeff Dean: And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.

Shawn Wang [00:16:26]: By the way, I did some math, and if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into proteins and whatever else is extremely information dense. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it sort of hints to the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe (I mean, since we're on this topic, I just get to ask you all the questions I always wanted to ask, which is fantastic) that there are some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had the DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality. Yeah.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images, because there's a reason evolution has evolved eyes something like 23 independent times: it's such a useful capability for sensing the world around you, which is really what we want these models to do. So I think we want them to be able to interpret the things we're seeing, or the things we're paying attention to, and then help us in using that information to do things. Yeah.

Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is not something most people think of as "turn a video into a SQL-like table."

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of... you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out how to build the AI mode that is maybe much broader in search and span versus the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, I think even pre-language-model-based work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods; you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds in order to get down to ultimately what you show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify the 30,000-ish documents that have the maybe 30 million interesting tokens. And then how do you go from that to the 117 documents I really should be paying attention to in order to carry out the task the user has asked? And you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a somewhat more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion, you are searching the internet, but you're finding a very small subset of things that are relevant.

Shawn Wang [00:22:47]: Yeah. I often tell people that are not steeped in Google search history that BERT was basically immediately inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you guys do; that's obviously the most important number to Google. Yeah.
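The funnel Jeff describes (trillions of tokens filtered by cheap, highly parallel scoring down to roughly 30,000 candidates, then to roughly 117 documents that the most capable model actually reads) sketched in Python. Every function and threshold here is a hypothetical placeholder, not a real Google or Gemini API:

```python
# Sketch of a multi-stage retrieval funnel: progressively more expensive models
# over progressively smaller candidate sets. All names are placeholders.
from typing import Callable, List

def funnel(corpus: List[str],
           cheap_score: Callable[[str, str], float],    # e.g. BM25 or a tiny embedding model
           mid_score: Callable[[str, str], float],      # e.g. a Flash-sized reranker
           answer_with_best_model: Callable[[str, List[str]], str],
           query: str) -> str:
    # Stage 1: lightweight scoring over everything (embarrassingly parallel).
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:30_000]
    # Stage 2: a somewhat more capable model narrows 30,000 down to ~117.
    stage2 = sorted(stage1, key=lambda d: mid_score(query, d), reverse=True)[:117]
    # Stage 3: only the most capable (and most expensive) model reads the survivors.
    return answer_with_best_model(query, stage2)

# Toy usage with trivial scorers, just to show the shape of the pipeline.
docs = [f"doc {i} about doubling revenue" for i in range(100_000)]
print(funnel(docs,
             cheap_score=lambda q, d: sum(w in d for w in q.split()),
             mid_score=lambda q, d: float(len(set(q.split()) & set(d.split()))),
             answer_with_best_model=lambda q, ds: f"answered from {len(ds)} docs",
             query="doubling revenue"))
```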
Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok as well, for xAI. Yeah.

I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have, like, a history of what the progression was? Oh yeah.

Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the web search and data mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or five, and that talk is really about that evolution. And one of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency by which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the word as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.

Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling, tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.

Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what the design parameters are that are going to be most important in designing it. So, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will that system work well? And I think a good design principle is that you're going to want to design a system so that the most important characteristics could scale by factors of five or ten, but probably not beyond that, because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that enables a very different point in the design space that would not make sense at X but all of a sudden at a hundred X makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just playing with the design space a little before you actually do a lot of writing of code. But, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, news-related queries... you know, if you've got last month's news index, it's not actually that useful for...

Shawn Wang [00:29:11]: News is a special beast. Was there any... like, you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be updated.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify whether the page is... you have to decide which pages should be updated and at what frequency.
Oh yeah.

Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah, yeah. Well, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "Latency Numbers Every Programmer Should Know." Was there a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this has sort of eight or ten different kinds of metrics, like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands or something?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do the back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then as you build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to look up something in this particular kind of...

Shawn Wang [00:31:51]: ...which is a simple byte conversion. That's nothing interesting. I wonder if you have any... if you were to update your...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.

Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's on the order of, depending on your precision, I think it's like sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
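Using the rough figures quoted above (on the order of 1 picojoule for a multiply, around 1,000 picojoules to move a weight across the chip), here is the kind of back-of-envelope calculation Jeff is describing, showing why batching amortizes weight movement. The parameter count is an arbitrary illustrative choice, and the energy constants are rough orders of magnitude, not measured TPU numbers:

```python
# Back-of-envelope energy accounting for serving a dense model, assuming one
# multiply-accumulate per parameter per token. Illustrative constants only.
PJ_PER_MAC = 1.0            # assumed cost of one multiply-accumulate, in picojoules
PJ_PER_WEIGHT_MOVE = 1000.0 # assumed cost of moving one weight into the MAC unit

def energy_per_token_pj(n_params, batch_size):
    compute = n_params * PJ_PER_MAC
    # Each weight is moved once and then reused across the whole batch,
    # so the movement cost is amortized over batch_size tokens.
    movement = n_params * PJ_PER_WEIGHT_MOVE / batch_size
    return compute, movement

n_params = 8_000_000_000  # assumed 8B-parameter dense model, purely illustrative
for b in (1, 8, 64, 256):
    compute, movement = energy_per_token_pj(n_params, b)
    total_joules = (compute + movement) / 1e12  # picojoules -> joules
    print(f"batch {b:>3}: {total_joules:8.3f} J per token, "
          f"data movement = {movement / (compute + movement):.0%} of the energy")
```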
Shawn Wang [00:33:40]: Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.

Shawn Wang [00:34:04]: Is there a similar trick like you did with putting everything in memory? Like, obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? To serve at your scale, you probably sort of saw that coming. What hardware innovations or insights were formed because of what you're seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. So you're now striping your smallish-scale model over, say, 16 or 64 chips, but if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? So this is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? What's the internal discussion? Yeah.

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, to take you three, four, five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field.

And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Wow. Because sometimes you can squeeze some changes into N plus one, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it goes both ways. You know, sometimes you can take advantage of lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that. Mm.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision?

Jeff Dean [00:38:43]: Because people are saying, like, ternary... Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount of time. Right? Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. You know, I think people have gotten a lot of mileage out of having very low bit precision, but then having scaling factors that apply to a whole bunch of those weights. Scaling... how does it... okay.

Shawn Wang [00:39:15]: Interesting. So, low precision, but scaled-up weights. Huh. Never considered that. Interesting. While we're on this topic, you know, the concept of precision at all is weird when we're sampling. At the end of this, we're going to have all these chips that'll do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?
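The "very low bit precision plus scaling factors over a whole bunch of weights" idea mentioned a moment ago is, in spirit, block-wise quantization. A minimal NumPy sketch with an arbitrary block size and bit width, as an illustration of the general technique rather than Google's actual format:

```python
# Block-wise quantization: store each block of weights in a few bits plus one
# higher-precision scale per block. Block size and bit width are arbitrary
# illustrative choices.
import numpy as np

def quantize_blockwise(w, bits=4, block=64):
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for signed 4-bit
    w = w.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                     # avoid divide-by-zero for all-zero blocks
    q = np.clip(np.round(w / scales), -qmax, qmax).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales):
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)
q, scales = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, scales)
print("mean abs reconstruction error:", np.abs(w - w_hat).mean())
# Storage drops from 32 bits per weight to 4 bits per weight plus one scale per
# 64 weights; the per-block scales recover most of the dynamic range.
```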
Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way that you can get sort of an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor: you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five-x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models) and also latency and throughput. If you look at things from that lens, it guides you to solutions that are going to be better in terms of being able to serve larger models, or equivalent-size models more cheaply and with lower latency.

Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.

Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog-based computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be low power, but you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can accomplish much more significant pieces of work collectively than you would ask a single model to do? So that's super interesting. How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because that would broaden out the capabilities of the models: the improvements that you're seeing in both math and coding. If we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they already proved you can do it with deep research. You kind of have it with AI mode, in a way; it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like, what is it? Both are like information retrieval of JSON. So I wonder if the retrieval is the verifiable part that you can score, or what. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even the retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant, or something? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.

Shawn Wang [00:44:28]: I do think there is that weird cliff where it feels like we've done the easy stuff and now the next part is super hard and nobody's figured it out; but it always feels like that every year. And exactly with this RLVR thing, everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... you know, LLM judge.
Um, uh, just to draw a bit on the IMO gold. Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about, like, the merger of symbolic systems and, and LLMs, uh, was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads, right? We have some distributed representation that is neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that, that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.

Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.

Jeff Dean [00:48:06]: I mean, I do think that IMO progression, with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then the next year switching to a single unified model that is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and, and now you don't need the specialized model. This is actually sort of very similar to the 2013-to-16 era of machine learning, right? Like it used to be, people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model; or I want to, you know, do speech recognition, I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. Like, one of my... so I interviewed ETA, who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that, like, people with this, like, universal skill set of just, like, machine learning, you just give them data and give them enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.

Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is, like, uh...
There's this concept of, like, uh, maybe capacity of a model: like, abstractly, a model can only contain the number of bits that it has. And, uh, and so, you know, God knows, like, Gemini Pro is, like, one to 10 trillion parameters, we don't know. But, uh, the Gemma models, for example, right? Like, a lot of people want the open-source local models that are like that, and, uh, they have some knowledge which is not necessary, right? Like, they can't know everything. Like, you have the luxury of the big model, and the big model should be capable of everything. But when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so, like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like, you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like, it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of, like, how long bridges are, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?

Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use, and use being able to retrieve from my email as a tool, and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple, um, you know, uh, stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are, like, uh, an interesting pursuit? Like, when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps, or?

Jeff Dean [00:52:37]: No, I mean, I think, I think vertical models are interesting. Like, you want them to start from a pretty good base model, but then you can sort of, uh, view them as enriching the data distribution for that particular vertical domain, for healthcare, say, um, or for, say, robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming; you know, it'll still be good at Python programming, 'cause we'll include enough of that, but there's other long-tail computer languages or coding capabilities that it may suffer on, or multi, uh, multimodal reasoning capabilities may suffer, 'cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module, that all can be knitted together to work in concert and called upon in different circumstances, right? Like, if I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.

Shawn Wang [00:54:36]: Installable knowledge. Yeah.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a, as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.

Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where they, there was a little bit of that, I think. Yeah.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, it's like, they're probably not out there, that you don't have, you know. I think that's really like the...

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data, which is not public healthcare data, uh, not public health data, but their own healthcare data. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on, say, public data. Yeah.

Shawn Wang [00:55:58]: Yeah. I, I believe, uh... by the way, also, this is, like, somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low-resource language in the context and it just learns.
Yeah.

Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But I think you put your whole data set in the context, right?

Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, uh, or Ethiopian Amharic or something. Um, you know, we're probably, yeah, not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.

Jeff Dean [00:56:49]:
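The "put the whole data set in the context" idea above boils down to something like the sketch below. It is only an illustration: the `call_model` placeholder, the corpus file, and the prompt wording are assumptions, not anything from the episode or a real Gemini API:

```python
# Illustrative sketch of in-context learning for a low-resource language:
# instead of fine-tuning, prepend whatever text exists for the language
# (grammar notes, word lists, example sentences) to the prompt and ask for
# a translation. `call_model` is a placeholder for a long-context LLM API.

from pathlib import Path
from typing import Callable

def translate_with_corpus_in_context(
    call_model: Callable[[str], str],
    corpus_path: str,          # e.g. a plain-text file of grammar notes + examples
    sentence: str,
    target_language: str,
) -> str:
    corpus = Path(corpus_path).read_text(encoding="utf-8")
    prompt = (
        f"Below is most of the written material that exists for {target_language}.\n"
        "Use it to translate the final sentence.\n\n"
        f"--- REFERENCE MATERIAL ---\n{corpus}\n--- END MATERIAL ---\n\n"
        f"Translate into {target_language}: {sentence}\n"
    )
    return call_model(prompt)
```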

What the Fixed Ops?! (WTF?!)
Doubling Growth in Fixed Ops - #automotive #shorts #dealership

What the Fixed Ops?! (WTF?!)

Play Episode Listen Later Feb 12, 2026 0:56


Scott Falcone shares how his fixed operations performance is up 100%, with both stores doubling their numbers. Expansion followed, more space, more capacity, and a stronger service operation built for growth.
Watch the full episode: https://youtu.be/epq_A01PbKg
Global Dealer Solutions offers a network of high-performance providers while remaining product agnostic. Knowing which tools to deploy makes a big difference. Having a trusted adviser; priceless. Schedule your complimentary consultation today. https://calendly.com/don-278.
BE THE 1ST TO KNOW. LIKE and FOLLOW HERE: www.linkedin.com/company/fixed-ops-marketing | https://www.youtube.com/channel/@fixedopsmarketing
Get watch and listen links, as well as full episodes and shorts: www.fixedopsmarketing.com/wtf
Join Managing Partner and Host, Russell B. Hill and Charity Dunning, Co-Host and Chief Marketing Officer of FixedOPS Marketing, as we discuss life, automotive, and the human journey in WTF?!
#podcast #automotive #fixedoperations

The Kim Constable Podcast
Missed a Workout? Why Doubling Up Is a Mistake (And What Actually Builds Muscle)

The Kim Constable Podcast

Play Episode Listen Later Feb 11, 2026 16:28


If you miss a workout, should you double up the next day? Short answer: No. In this episode, Kim explains why doing two sessions in one day often leads to holding back — and why that kills muscle growth. You'll learn: Why strength training should be treated like a sprint, not a marathon How subconsciously conserving energy limits your results What “training to failure” actually means Why finishing the full minute isn't the goal How women misunderstand load and intensity Why muscle only grows when you force it to do more than it currently can If you're working hard but not seeing muscle growth, this episode will shift your entire mindset. Follow Kim:

Brand It, Build It Podcast
303: The Mobile Design Strategy That's Doubling Conversions for Our Clients

Brand It, Build It Podcast

Play Episode Listen Later Feb 9, 2026 5:44


If you've been wondering whether your mobile site is actually working for you, keep reading. This strategy might just transform how you approach web design.
Hosted by Kelly Zugay of With Grace and Gold® — The Brand It, Build It Podcast, a chart-topping small business marketing podcast, equips you to build and grow your creative small business with purpose and strategy.
Podcast Show Notes: https://withgraceandgold.com/category/podcast/
With Grace and Gold on Instagram: https://www.instagram.com/withgraceandgold
Free Resources from With Grace and Gold: https://www.withgraceandgold.com/free
Honored as Showit Designer of the Year, With Grace and Gold® has proudly served thousands of small businesses and creative founders worldwide through award-winning, elevated, purpose-driven brand and Showit web design since 2014. For custom brand design services, custom Showit web design services, and easy-to-customize Showit website templates for fine art photographers, event planners, wedding professionals, interior designers, and creatives, please visit With Grace and Gold: www.withgraceandgold.com

Durand on Demand
From the Vault: Doubling Your Productivity and Job Satisfaction | Episode 3

Durand on Demand

Play Episode Listen Later Feb 9, 2026 48:46


From the Vault: Episode 3 of The Dave Durand Show. Dave sits down with Matthew Pinto—founder and President Emeritus of Ascension Press—to talk about leadership inside mission-driven organizations and why the ability to pivot (and track your numbers) matters in any business. Dave also breaks down how to push past your potential through intense, precise work paired with truly intentional rest. Plus, listener Q&A on working with difficult co-workers and how to protect your culture when a high performer starts hurting team morale.

CruxCasts
Metals Exploration (LSE:MTL) - Doubling Gold Output as Build on Track & On Budget

CruxCasts

Play Episode Listen Later Feb 9, 2026 33:31


Interview with Darren Bowden, CEO of Metals Exploration PLC
Our previous interview: https://www.cruxinvestor.com/posts/metals-exploration-lsemtl-nicaragua-build-on-track-dupax-abra-targets-add-long-term-upside-8132
Recording date: 4th February 2026
Metals Exploration, the AIM-listed gold producer, is executing a strategic transformation that will more than double its annual production to 140,000 ounces by 2027 through its La India project in Nicaragua. CEO Darren Bowden outlined the company's ambitious growth trajectory as construction proceeds ahead of schedule and within its revised budget parameters.
The company currently operates the Runruno mine in the Philippines, which is expected to produce approximately 55,000 ounces in 2026 before exhausting its reserves in December. However, cash flow from this operation is fully funding the development of La India without requiring significant equity dilution—a commitment management has emphasized to shareholders.
La India represents a substantial upgrade to Metals Exploration's production profile. The project will transition the company from open-pit to underground mining at higher grades and lower costs, with initial production targeting just over 100,000 ounces in 2027, ramping to 140,000 ounces as underground operations commence. The deposit contains 2.4 million ounces of resources supporting a 12-15 year mine life, with significant exploration upside across multiple epithermal vein systems.
Construction progress remains robust, with the company ending 2025 with $45 million in cash and all major capital expenditures committed. Management expects to generate over $100 million in free cash flow from Runruno operations, comfortably covering the remaining $90 million required to complete La India. The critical path centers on electrical infrastructure connections, while mechanical and steel erection work proceeds smoothly.
Beyond La India, Metals Exploration is actively pursuing construction-ready assets in Central America and Asia where its processing plant and experienced construction team can be redeployed following Runruno's closure. With a market capitalization exceeding £400 million and strong cash generation, the company is positioned to pursue opportunities in jurisdictions where other operators remain reluctant, leveraging direct government relationships and proven execution capabilities to access quality assets at attractive valuations.
View Metals Exploration's company profile: https://www.cruxinvestor.com/companies/metals-exploration-plc
Sign up for Crux Investor: https://cruxinvestor.com

The Information's 411
Why Alphabet is Doubling AI CapEx, Nvidia Delays Gaming Chip, OpenAI Hires AI Consultants

The Information's 411

Play Episode Listen Later Feb 5, 2026 32:44


Bain Capital Ventures' Saanya Ojha talks with TITV Host Akash Pasricha about Alphabet's massive $175 billion CapEx plan and the deepening fear within the SaaS market. We also talk with Lightspark CEO David Marcus about the brutal crypto selloff and his viral essay on PayPal's decline, and we get into OpenAI's aggressive enterprise hiring push with our reporter Sri Muppidi.
Articles discussed on this episode:
https://www.theinformation.com/articles/nvidia-delay-new-gaming-chip-due-memory-chip-shortage
https://www.theinformation.com/articles/openai-hiring-hundreds-ai-consultants-boost-enterprise-sales
https://www.theinformation.com/briefings/alphabet-projects-doubled-capex-strong-fourth-quarter
https://www.theinformation.com/briefings/former-paypal-executive-marcus-explains-payments-firm-lost-way
Subscribe: YouTube: https://www.youtube.com/@theinformation The Information: https://www.theinformation.com/subscribe_h
Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda
TITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.
Follow us:
X: https://x.com/theinformation
IG: https://www.instagram.com/theinformation/
TikTok: https://www.tiktok.com/@titv.theinformation
LinkedIn: https://www.linkedin.com/company/theinformation/

Turning Point with Priya Sam
I've Done Everything Right… So Why Am I Stuck?

Turning Point with Priya Sam

Play Episode Listen Later Feb 3, 2026 17:02


Priya Sam is the Host & Founder at Unleash Your Voice, where she helps ambitious women in six-figure corporate roles who struggle with being overlooked and undervalued step into undeniable executive presence and unlock the promotions, visibility, and compensation they truly deserve. Her signature Power Story Framework is a key part of all of her programs as she helps her clients build their personal power story banks, strategic communication skills, and confidence.In this episode, you'll learn:→ Why working harder stops working after the first 5–10 years of your career→ The three strategic shifts that helped Priya double her salary and accelerate her leadership trajectory→ How to advocate for yourself without sounding pushy or self-promotional→ Why building champions is essential for career growth—and how to do it intentionally→ How to create a strategic personal brand that positions you for promotions, visibility, and influence

The Resilient Recruiter
How to Build 8 Revenue Streams That Grow Your Agency in Any Market, with Gerard Koolen

The Resilient Recruiter

Play Episode Listen Later Feb 2, 2026 57:54


Why do some recruitment agencies collapse during recessions while others keep growing? Gerard Koolen has watched his business expand through three major crises: the 2008 financial collapse, COVID-19, and the war in Ukraine. Each time, Lugera grew. Not by working harder, but by building the business differently. Gerard is the founder of Lugera, a recruitment agency with 11 offices, over 500 employees, and €243 million in annual revenue. Operating across a region tested by economic downturns and geopolitical instability, his firm has been forced to adapt repeatedly. Instead of relying on a single revenue stream, Gerard built what he calls an “all-seasons service portfolio.” Over time, Lugera developed eight distinct revenue streams. When permanent hiring slowed, outplacement surged. When clients froze recruitment, other services stepped in. One stream compensated for another, keeping the business resilient when markets turned. That approach created a new problem. Managing eight revenue streams manually nearly broke the company. Gerard invested 10 years and €2.2 million in building technology to automate work that once required a team of 30 people. Today, one part-time employee handles what used to take an entire department. In this episode, Gerard breaks down how the model works. He explains how to monetise the 99% of candidates most agencies never place, why traditional ATS systems quietly limit growth, and how outplacement can become a counter-cyclical revenue stream. If you want to build a recruitment business that grows through uncertainty instead of being crushed by it, this conversation will change how you think about revenue, technology, and resilience. What you'll learn: Why Lugera grew 20% during the 2008 recession What an “all-seasons service portfolio” looks like in practice Why most agencies monetise only 0.2% of their candidate database How eight revenue streams reduce risk and smooth volatility Why your ATS may be capping your growth without you realising How automation replaced 30 staff with one part-time role Why does outplacement generate revenue when hiring stops Episode highlights: [03:30] Growing during the 2008 recession [05:53] The all-seasons service portfolio [08:01] Monetising the 99% of candidates you never place [14:34] The real cost of building the technology [19:42] Why most ATS platforms restrict growth [27:02] Doubling placements without doubling effort [31:33] Turning outplacement into a €1M revenue stream [45:25] Automated outreach that converts job ads into leads Sponsor This episode is brought to you by Recruiterflow. Recruiterflow is an AI-first ATS and CRM built to help recruitment businesses run and scale more efficiently. It combines ATS, CRM, sequencing, data enrichment, marketing automation, and AI agents in one platform. Many leaders in our coaching community rely on Recruiterflow to streamline operations and improve execution. Learn more or request a demo at recruitmentcoach.com/recruiterflow Guest Bio Gerard Koolen is the founder of Lugera, a recruitment and staffing agency with 11 offices across Eastern Europe, over 500 employees, and €243 million in annual revenue. After investing 10 years and €2.2 million developing proprietary AI matching technology, Gerard created the Recruitment Revenue Platform to help recruitment agencies build multiple revenue streams beyond traditional placement fees. This is Gerard's third appearance on The Resilient Recruiter. 
Connect with Gerard LinkedIn: Gerard Koolen Website: lugera.com Recruitment Revenue Platform: recruitmentrevenueplatform.com Special offer: Get 100 free credits plus personal onboarding at recruitmentcoach.com/staa Connect with Mark Free strategy session: recruitmentcoach.com/strategy-session LinkedIn: Mark Whitby Instagram: @RecruitmentCoach Subscribe to The Resilient Recruiter

The Cashflow Project
Doubling Business Valuation and Building Legacy: The Inspirational Journey of Marc Adams

The Cashflow Project

Play Episode Listen Later Jan 30, 2026 46:57


Welcome back to The Cashflow Project! In this episode, Marc Adams—strategy mentor, business exit planner, bestselling author, and cancer survivor—shares how a stage four diagnosis during the pandemic reshaped his mission from building companies to helping founders maximize business value and keep more wealth when they exit. Marc dives into strategies that help owner-led businesses increase valuation without additional spending, navigate the tax complexities of selling a company, and build a legacy that goes beyond profit. Drawing from his experience in real estate and his “Double and Keep It Blueprint,” he offers practical insights on creating impact while preparing for a successful exit. Whether you're scaling a business, planning a future sale, or looking for inspiration to take decisive action, this episode delivers powerful lessons on purpose, resilience, and building wealth with intention. [00:00] "Entrepreneurial Journey and Lessons" [03:35] "Private Equity Reflections and Cancer" [09:06] "Good News, Bad News at Hospital" [12:19] "Double Business Value Blueprint" [14:40] From Cancer to Business Success [18:24] "Navigating Business Growth Challenges" [19:41] "Global AI-Powered Business Solutions" [22:59] "Living with Purpose and Impact" [26:16] "Rethinking Education: Online Learning" [29:19] "Reflections on Redemption and Purpose" [32:37] Business Continuity and Legacy Planning [38:31] Start, Adjust, and Progress [40:46] "Comfort Breeds Apathy" [44:52] "Inspiring Journey with Mark Adams" [46:13] "Subscribe, Share, Take Action" Connect with Marc Adams! Website Website 2 LinkedIn Connect with The Cashflow Project! Website LinkedIn YouTube Facebook Instagram

Broeske and Musson
OUT OF TOUCH: Fresno Unified Workers Blast Trustees for Doubling Their Pay

Broeske and Musson

Play Episode Listen Later Jan 29, 2026 7:57 Transcription Available


Fresno Unified bus drivers, custodians, and teachers are outraged after the school board voted 6–1 to more than double trustee stipends from about $2,110 to $4,500 a month amid a multimillion-dollar budget deficit. Classified workers called the raise a “huge slap in the face,” noting they're still fighting for a fair contract while the district faces deep cuts. Teachers also condemned the move as “tone-deaf,” especially as the district prepares for $50 million in reductions over the next two years. Please Like, Comment and Follow 'Broeske & Musson' on all platforms: --- The 'Broeske & Musson Podcast' is available on the KMJNOW app, Apple Podcasts, Spotify or wherever else you listen to podcasts. --- 'Broeske & Musson' Weekdays 9-11 AM Pacific on News/Talk 580 AM & 105.9 FM KMJ | Facebook | Podcast | X | - Everything KMJ KMJNOW App | Podcasts | Facebook | X | Instagram See omnystudio.com/listener for privacy information.

The Freelancer's Teabreak
Book Doubling for Neurodivergent Writers with Gail Doggett

The Freelancer's Teabreak

Play Episode Listen Later Jan 29, 2026 43:49


Welcome back to The Freelancer's Tea Break! This week, I'm joined by my friend, coach, and book genie, Gail Doggett, founder of Book Doubling, a book coaching business tailored for neurodivergent writers, especially those with ADHD. In this episode, we delve into how Gail aids writers in completing their dream books by using techniques like body doubling and screen sharing. We also discuss Gail's own journey with writing, the importance of developing a writing practice, and the unique challenges and strengths of neurodivergent individuals in the writing process. What You'll Learn in This Episode How Book Doubling supports ADHD and neurodivergent writers Why body doubling boosts focus and reduces overwhelm How to build a writing practice that works with your brain The power of handwriting, single‑tasking, and reflective writing How to write consistently when freelancing or juggling life changes The benefits of co‑writing sessions and community accountability How Substack can help you reconnect with your writing voice Gail Doggett is a neurodivergent writing coach, editor, and former acquisitions editor at a Big Five publishing house. Gail has spent over 20 years supporting writers from messy first drafts to finished books, with a special focus on late-diagnosed neurodivergent women who are juggling creativity, life, and everything in between. As an ADHDer herself, Gail brings compassion, strategy, and a deep understanding of how neurodivergent brains actually work, plus her own signature approach called Book Doubling, which is like body doubling but with editorial wisdom and accountability. She loves stories, knows exactly what it feels like to be stuck, and is here to help writers finally get their work over the finish line. Join Gail on Substack or Connect with her on Instagram | Facebook | TikTok | LinkedIn | Website | Newsletter Join Gail's Write Club membership Timestamps: 00:00 Introduction and Guest Introduction 00:54 Gail's Background and Book Doubling Concept 01:37 Challenges Faced by Neurodivergent Writers 02:25 The Importance of Self-Belief and Support 04:53 Establishing a Writing Practice 08:21 Body Doubling and Screen Sharing Techniques 13:34 Tracking Progress and Reflecting on Writing 19:28 Adapting to Life Changes and Finding Focus 20:58 Freelancing and Writing in Pockets of Time 22:22 Staying Connected with Your Writing 23:14 The Joy of Writing vs. The Chore of Writing 24:47 The Power of Handwriting 26:38 Reflection and Adjustment in Writing 27:40 The Importance of Single-Tasking 29:52 Co-Writing Sessions and Their Benefits 31:21 Introducing Write Club 33:52 One-to-One Writing Support 36:00 The Magic of Substack 42:53 Conclusion and Future Plans Follow me on Instagram Follow me on Bluesky Email: hello@emmacossey.com  Come join us in the free Freelance Lifestylers Facebook group Want more support? Check out the Freelance Lifestyle School courses and membership. Join the Freelance Lifestyle Discord Community: https://discord.gg/RKYkReS5Cz

AV SuperFriends
AV SuperFriends: Off the Rails - I'm a psychologist, but I'm not YOUR psychologist

AV SuperFriends

Play Episode Listen Later Jan 29, 2026 90:01


Recorded January 23, 2026 In this episode, the panel looks into the growing disconnect between AI hype and the realities of higher-ed operations. As campuses spin up task forces and strategy decks, the people closest to the work are still trying to explain that real progress comes from solid processes, clean data, and systems that actually integrate, not shiny tools bolted on top of chaos. The conversation drifts (on purpose) through buzzwords, half-baked pilots, and the familiar frustration of being asked to support technologies that skipped the fundamentals. Along the way, the panel brings a healthy mix of skepticism, humor, and hard-earned experience from living at the intersection of AV, IT, and reality. It's part therapy session, part reality check, and part drive-time radio nonsense. If you're sitting on an AI committee, dodging hype cycles, or just trying to keep classrooms running while everything gets labeled "AI-powered," this one will hit close to home.   News article: European Journal of Education - Intelligent Classrooms: How AI and IoT Can Reshape Learning Spaces Connect with Dr. Kati Peditto: https://www.linkedin.com/in/katipeditto/ DLR Group "Evolution of Campus"  https://www.dlrgroup.com/firm-news/evolution-of-campus-research-outcomes/   Alternate show titles: You drive the bus you throw people under It's a switch on the wall The AI-native We're gonna need you to talk about this with other people Two-factor three different times The sun couldn't rise tomorrow The cooperative nature this will require is one of the biggest hurdles We're doing this and if not you're getting fired Start in an existing building How insistent are you? AI-enabled whatchamadoolie Your internal relationships are not my problem Wait, can I give some context real quick? What the hell is the ask? Define something! Learning styles don't exist Information isn't as important as it used to be Florida ceiling They've conversated with the clients Am I gonna be the wet blanket here? Just give me some bread crumbs If your university has X amount of money How do we know we've elevated the human experience? Doubling the number of butts in seats   We stream live every Friday at about 315p Eastern/1215p Pacific and you can listen to everything we record over at AVSuperFriends.com    ▀▄▀▄▀ CONTACT LINKS ▀▄▀▄▀ ► Website: https://www.avsuperfriends.com ► Twitter: https://twitter.com/avsuperfriends ► LinkedIn: https://www.linkedin.com/company/avsuperfriends ► YouTube: https://www.youtube.com/@avsuperfriends ► Bluesky: https://bsky.app/profile/avsuperfriends.bsky.social ► Email: mailbag@avsuperfriends.com ► RSS: https://avsuperfriends.libsyn.com/rss Donate to AVSF: https://www.avsuperfriends.com/support

Ask a Jew
The Jewish Conspiracy to Change Yassine Meskhout's Mind

Ask a Jew

Play Episode Listen Later Jan 28, 2026 75:37


Yassine Meskhout is a public defender, a Moroccan, an anarchist, a libertarian, a cyclist (*shudder*) and now, our new best friend. We've become obsessed with his writing since his viral November 2023 essay “The Jewish Conspiracy To Change My Mind” and the banger, “Why I Write About the Jews”. We talk to Yassine about working with criminals, trying to figure out why some of his friends were celebrating on October 7th, why ICE shouldn't shoot Mandy Patinkin, and how his father, who believes Israel created ISIS, also believes the Jews should have their own state.
Check out Substack for a fun Israeli-Moroccan inspired Spotify playlist!
Also:
* First things first - would an ex-Jew fool Hitler?
* Escaping the lack of McDonalds in Morocco through the diversity lottery.
* DJ Smack-that-ass, Esq.
* The life of a public defender - eleven magic words.
* The system works, mostly.
* Minnesota, ICE, and what expectations we should have from law enforcement.
* Trying to understand the celebrations of October 7th.
* Doubling down on Islam as a kid.
* This amazing story about Yassine's father.
* Western leftists - please find meaning elsewhere.
* The hard truth Jews need to hear - not everyone is an antisemite, some of them are just stupid.
* How to protest well.
* Everyone needs to stop being so confident.
* Leave a white woman, take a white woman.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit askajew.substack.com/subscribe

The Motherhood Anthology Podcast: Photography Education for a Business You Love
Episode 159: Family Photography through an Editorial Lens

The Motherhood Anthology Podcast: Photography Education for a Business You Love

Play Episode Listen Later Jan 27, 2026 46:07


What does it mean to bring an editorial approach to family photography? In this episode, Kim and Ali sit down with Katie Ward, a New York-based photographer whose work has been featured in Vogue, Vice, and the Jewish Museum. Katie shares how she transitioned from corporate video production to full-time photography after an unexpected layoff—and how she built a thriving business that prioritizes creative fulfillment over volume. If you've ever wondered how to stand out in a crowded market or whether you really need social media to succeed, this conversation will challenge what you thought was possible.
Topics Covered: How to build a portfolio from scratch / Doubling your prices without losing your business / Creating a client experience beyond the camera / Running a profitable photography business without social media / Finding your unique editorial voice
Connect with Katie: https://www.katie-ward.com
Check out Picture Perfect Rankings: Group Coaching: https://pictureperfectrankings.com/found-booked/ Learn more: https://pictureperfectrankings.com/
Connect with The Motherhood Anthology: Join TMA! Enrollment link - https://themotherhoodanthology.com/photography-mentoring/
Connect with TMA: Website | Membership | Courses: www.themotherhoodanthology.com Free Community: https://www.facebook.com/groups/themotherhoodanthology Our Instagram: instagram.com/themotherhoodanthology
Connect with Kim: Site: https://kimbox.com IG: https://www.instagram.com/kimbox

WSKY The Bob Rose Show
Full Show: Raging ‘nutbaggery'

WSKY The Bob Rose Show

Play Episode Listen Later Jan 27, 2026 132:00


Pres. Trump is reaching out to solve the deadly ICE chaos in Minneapolis. Eliminating the violence, and getting criminal immigrant deportations back on track is the goal. Meanwhile, left wing crackpots are overplaying their hand. Doubling down on crazy, by invading churches, rioting at hotels housing federal agents, and preventing the removal of truly dangerous child predators, gang members, and drug lords. Perspective and commentary on all the morning's biggest news stories for 1-27-26.

Monument Techno Podcast
MNMT Recordings : Spekki Webu & Mental — The Seed, Mo:Dem Festival 2025

Monument Techno Podcast

Play Episode Listen Later Jan 26, 2026 178:23


“You never know what the other is going to play and that adds a lot of adrenaline. Then you can create something unique, a live patchwork, an improvised mosaic. Doubling the incertitude and the variables, you can definitely have more fun during the performance.” This mix, recorded live at Seed Stage at Mo:Dem Festival captures that exact friction, the moment where two distinct musical minds collide to form something entirely new. Across the three hours, the story unfolds with unconventional rhythms, experimental grooves and deep cerebral soundscapes that defy easy categorization. Spontaneous intuition evolves into a singular flow, where Spekki Webu's futuristic electronics meets the detailed, shifting layers of Mental. Follow https://soundcloud.com/spekkiwebu https://www.instagram.com/spekkiwebu.mirrorzone https://soundcloud.com/marcomental https://www.facebook.com/marcomental/

B2B Better
How AlleyOop Scaled to 1.2M LinkedIn Followers And Turned It Into Revenue | Gabe Lullo, CEO of AlleyOop

B2B Better

Play Episode Listen Later Jan 26, 2026 20:19


Most companies treat LinkedIn like a megaphone. AlleyOop turned it into a reality show. In this episode, host Jason Bradwell sits down with Gabe Lullo, CEO of AlleyOop, to unpack how his sales development agency scaled from 25,000 to over 1.2 million LinkedIn followers by empowering employees to build their personal brands, not pushing a corporate page. Gabe breaks down the playbook: hiring people who want to be on camera, building an in-house media team, running internal podcasts that never get published, and tying content performance directly to commission. This isn't theory, it's a proven system filling enterprise calendars with qualified meetings. Jason and Gabe dive into AlleyOop's 16-year evolution from traditional outbound to organic LinkedIn content. The real insight? Gabe stopped caring about the company page and focused entirely on employee personal brands. They aggregated all employee profiles (originally 25K followers, now 1.2M) and turned their team into documentary subjects. Employees aren't forced to post, but those who participate get full support: professional video editing, copywriting, and a content calendar. Gabe walks through hiring - candidates now submit sample social posts during interviews and how they set people up for success. They run internal podcast-style interviews, chop them into posts, send them to copywriters for frameworks, then hand them back to employees to personalise. The feedback loop is built around incentives. Sellers get more leads (more commission). Recruiters attract more candidates (more placements, more money). Everyone's financially tied to content performance, so buy-in is organic. Gabe measures success not just by impressions, but by whether prospects recognise team members before demos, cutting 60% of the typical sales pitch. Jason asks about the CEO fear: won't employees get poached if we build their brands? Gabe's answer: people leave anyway. AlleyOop actually built a business model around clients hiring their reps and gets paid when it happens. Companies trying to poach probably aren't investing in teams like AlleyOop does, so culture becomes retention. Looking to 2026, Gabe's taking the human-first approach from the feed into DMs. LinkedIn's becoming the new email inbox (buried in automation), so they're building tools for real one-on-one conversations that convert faster. If you're trying to activate your team on LinkedIn without it feeling forced, this episode is your blueprint. Gabe proves you can build a scalable, revenue-driving content engine by supporting people instead of controlling them. Whether you're in sales development, professional services, or any people-first business, these principles will transform how you think about employee advocacy. 
00:00 - Introduction: BDR as a service and people-first growth 02:00 - AlleyOop's 16-year evolution and go-to-market 04:30 - Doubling down on LinkedIn content 3-4 years ago 07:00 - From 25K to 1.2M followers: the aggregation strategy 10:00 - Hiring for content: asking candidates for sample posts 13:00 - Setting employees up for success: the in-house media team 16:00 - Internal podcasts, videographers, and copywriters 19:00 - Feedback loops: 70/30 business vs personal content 22:00 - Tying content to commission: financial buy-in 25:00 - Measuring success: recognition before demos 28:00 - Overcoming the "they'll get poached" objection 31:00 - 2026 strategy: taking conversations into DMs 34:00 - Where to find Gabe and AlleyOop Connect with Jason Bradwell on LinkedIn Connect with Gabe Lullo on LinkedIn Subscribe to Do Hard Things Podcast on Apple Podcasts Visit AlleyOop's official site Explore B2B Better website and the Pipe Dream podcast

StarDate Podcast
Doubling Up

StarDate Podcast

Play Episode Listen Later Jan 23, 2026 2:14


There just aren't enough superlatives to describe the galaxy OJ 287. It's a quasar – an especially bright object powered by two supermassive black holes. One of them is about 150 million times as massive as the Sun. The other is 18 billion times the Sun's mass – one of the heaviest black holes yet seen. They team up to produce outbursts that are a trillion times brighter than the Sun – brighter than all the stars in the Milky Way Galaxy combined. OJ 287 is always bright. But every few years, it flares up – the result of interactions between the black holes. Each of them is encircled by a giant disk of gas. As the gas spirals in, it gets extremely hot. That makes the disks extremely bright. The smaller black hole orbits the larger one every 12 years. The orbit is tilted. So every six years, the black hole plunges through the disk around the larger black hole. That can heat some regions to trillions of degrees, producing the flare-ups. Astronomers recently used radio telescopes to take a picture of the system. They saw a long “jet” of particles from the smaller black hole. The jet is twisted by the interactions between the black holes – confirming the profile of this amazing system. OJ 287 is in Cancer, which is low in the east at nightfall. Even though it's billions of light-years away, OJ 287 is bright enough to see through most amateur telescopes. Script by Damond Benningfield

The Successful Contractor Podcast
Why Daikin Bet Big on R-32 (And What It Means for Your Business)

The Successful Contractor Podcast

Play Episode Listen Later Jan 23, 2026 39:05


Book a free strategy call to see how we can help you hit your goals and beyond: https://bit.ly/3TvGiNW or call us at: (214)-453-1591  Grab our FREE resource: The Foundation Series, Real strategies to build a business that runs (and grows) without chaos: https://bit.ly/3Yqzow5  --------------------------------------------------------------------------------  What if the refrigerant decision you make today determines whether you're ahead of the curve—or scrambling to catch up five years from now?  Ben Middleton, National Sales Training Manager for Goodman, Amana, and Daikin, has spent nine years helping contractors navigate the biggest shifts in HVAC. In this episode of The Successful Contractor, Ben breaks down why Daikin chose R-32 over 454B, what that means for your business, and why the contractors who embrace change now will dominate their markets.  But this conversation goes far beyond refrigerants. Ben shares how Daikin is revolutionizing HVAC training with virtual reality—yes, actual VR headsets that eliminate distractions and boost retention. He reveals a game-changing rebate tool that finds all 2,000+ utility programs and files the paperwork for you. And he delivers a powerful message about thriving in what he calls the era of 'perma-uncertainty.'  What You'll Learn in This Episode:  •   Why Daikin chose R-32 over 454B—and the real-world benefits contractors are seeing (smaller units, lower cost, proven since 2012)  •   How to use Daikin's free VR training to get new techs job-ready faster  •   The Daikin Energy Rebate Center—a tool that finds all available rebates and files the paperwork for homeowners  •   Why the $2,000 IRA heat pump rebate is going away—and how to use that urgency to close more sales NOW  •   The financing shift: why 15-year loans are now beating 0% interest (and what it means about affordability)  •   Ben's 'perma-uncertainty' framework—why NOW is the time to innovate, test, and push the envelope  •   Why Ben says 'the days of the cube-style unit are numbered'—and how to position yourself as the innovator in your market  •   Critical advice: have 3-6 months of operating capital in the bank  Ben also hosts his own podcast, Accelerated HVAC Success, where he brings in vendors, contractors, and product managers to share what's working right now. Check it out on YouTube: https://www.youtube.com/@AcceleratedHVACSuccess  Resources Mentioned: •   R32Reasons.com - Learn why Daikin chose R-32 •   HVAC Learning Campus - Free on-demand training and VR simulations •   Daikin Energy Rebate Center - Find and file utility rebates automatically  Whether you're evaluating refrigerant options, looking for better training tools, or trying to figure out how to close more sales in a tight economy—this episode delivers actionable insights you can use today.  Watch now on YouTube or listen on your favorite podcast platform. And don't forget to subscribe to The Successful Contractor for more interviews that move the needle.  
Chapters:  00:00 - Meet Ben Middleton, National Sales Training Manager at Daikin 02:15 - Time Is the Biggest Barrier to Training 06:28 - Ben's Journey: From Aeronautical Engineering to HVAC Training 09:45 - The Ripple Effect of Great Training 11:41 - Accelerated HVAC Success Podcast 14:36 - Why Daikin Chose R-32 Over 454B 18:30 - R-32 Is Proven Since 2012 19:00 - Market Response to R-32 21:55 - The Financing Shift: 15-Year Loans vs 0% Interest 23:26 - Daikin's Core Value: Absolute Credibility 26:37 - The Days of the Cube-Style Unit Are Numbered 27:50 - Daikin's Training Resources & HVAC Learning Campus 30:50 - The Daikin Energy Rebate Center 31:20 - The $2,000 Heat Pump Rebate Is Going Away 33:49 - VR Training: The Magic School Bus for HVAC 37:15 - Why Retention Is Higher in VR 40:13 - Perma-Uncertainty = Innovation Opportunity  Show Notes  The Successful Contractor Podcast is a part of the CertainPath family. CertainPath builds successful home service businesses—and has for 25 years. We do it by providing contractors with a proven path to success, professional coaching, software solutions, and a member community of 1,200+ strong. Doubling your sales, with a 20% net profit, and an inspiring company culture is ALL possible. Let us show you the way. With CertainPath, Success is Made Certain. Visit www.mycertainpath.com for more information.  FOLLOW CERTAINPATH:  Facebook: https://www.facebook.com/CertainPath LinkedIn: https://www.linkedin.com/company/certainpath Instagram: https://www.instagram.com/certainpath/ 

Heather du Plessis-Allan Drive
Steve Watt: Police Association President on police doubling their recruitment spending

Heather du Plessis-Allan Drive

Play Episode Listen Later Jan 21, 2026 2:29 Transcription Available


There's doubts police are getting bang for their buck on recruitment spending. The Post reports police spent just over $1 million to market recruitment - between December 2023 and the following November. That surged to $2.76 million in the year after Richard Chambers took over as Police Commissioner. Police Association President Steve Watt says all organisations have to spend to make themselves attractive. "But when we're still struggling to get the 500 officers target, you'd have to wonder if that was money well spent." LISTEN ABOVESee omnystudio.com/listener for privacy information.

The Mid-Career GPS Podcast
332: Settling for “Fine” at Mid-Career: Why Silence Is Costing You Influence, Pay, and Opportunity

The Mid-Career GPS Podcast

Play Episode Listen Later Jan 20, 2026 22:03 Transcription Available


Send us a text
“Fine” sounds harmless, but at mid-career it is often a warning light.
In this episode, I unpack why quiet professionalism can be misread as low ambition and how being seen as reliable can quietly turn into being perceived as replaceable. If you are job-hugging for stability, I help you distinguish between a smart season of skill-building and a slow slide into stagnation, and then show you how to course-correct with intention.
We'll explore the psychological cost of settling for fine, including overfunctioning to stay relevant, growing resentment, and the internal narrative that you should just be grateful. I walk you through how to name your genius, the specific value others rely on but rarely articulate, and how to put that value to work in rooms where decisions are actually made.
You will learn a practical approach to brand stewardship at mid-career. That means protecting your brand so leaders, peers, and stakeholders clearly understand your value, and promoting your brand so your impact is remembered when stretch assignments, promotions, and high-visibility projects are on the table.
I cover the early warning signs of a career plateau, including generic performance reviews, smaller bonuses, and fewer strategic invitations, along with specific actions you can take to regain momentum.
Mid-career is your wealth-building window. Doubling down on what you do exceptionally well is not bragging. It is stewardship of your impact, influence, and earning power.
If “fine” has become your default answer, treat it as a signal to act. Listen in, take the scripts and steps I share, and choose one bold action this week to make your genius visible.
If you found this conversation helpful, follow the podcast, share it with a colleague who needs the nudge, and leave a review so more mid-career professionals can find the show.
Support the show
Visit https://johnneral.com/resources to: Subscribe to my free leadership and career newsletter Get The Mid-Career Clarity Code to help you figure out whatever is next for you and your career Please leave a rating and review on Apple Podcasts here. Connect with John on LinkedIn here. Get John's New Mid-Career Journal on Amazon here. Follow John on Instagram @johnneralcoaching. Subscribe to John's YouTube Channel here.

Claims Game Podcast with Vince Perri
The Hidden Asset Destroying (or Doubling) Your Business Exit Value | Jason Bush

Claims Game Podcast with Vince Perri

Play Episode Listen Later Jan 20, 2026 41:34


In this insightful episode, host Vince Perri, Certified Exit Planning Advisor (CEPA) and Business Broker, sits down with strategic advisor Jason Bush of Linville Team Partners to break down one of the most overlooked components of business exits: commercial real estate. Jason is a CEPA who operates at the intersection of M&A, commercial real estate (CRE), and exit planning. In this conversation, he explains how business owners can significantly increase their total enterprise value by treating real estate as a strategic asset—not an afterthought—during the sale process. For many Main Street business owners, the majority of their net worth is tied up in the property. Yet in most M&A transactions, the analytical rigor is applied almost exclusively to the operating company, leaving the real estate undervalued and poorly structured. Jason's unique background—combining quantitative experience as a former Civil Engineer and Professional Engineer (PE), buy-side M&A exposure, and CRE advisory—gives him a rare perspective on how to properly align business and real estate strategy to maximize outcomes. In this episode, you'll learn: The Overlooked Asset: Why commercial real estate is often ignored in exit planning and how separating OpCo and PropCo can unlock significant net worth. The Undermarket Rent Trap: How failing to set market-based rent can destroy income-based valuations and limit wealth creation. Creating Optionality: How to structure flexibility so you can choose whether to sell, retain, or lease the real estate at exit. Maximizing Value After the Sale: Why selling the business first—especially to private equity—can increase real estate value by improving tenant credit quality and compressing cap rates. Lease Pitfalls: How month-to-month leases can drop a company's valuation floor to zero and why lease terms matter just as much as financials. If you're planning an exit—or even thinking about one—this episode will change how you view commercial real estate in your overall strategy.

Zero Wasted Days
Ep 53: Expansion Era - My Personal Blueprint for a Multi-Six-Figure Business in 2026 with Peace and Discernment

Zero Wasted Days

Play Episode Listen Later Jan 19, 2026 31:01


This episode of Zero Wasted Days is less a lesson — and more a re-introduction. Recorded in the heart of winter, this conversation marks the beginning of what I'm calling my Expansion Era: a season of growth led by peace, discernment, and grounded clarity, not urgency or constant seeking. In this episode, I share how I'm intentionally designing my business for another multi-six-figure year while working just three days a week, honouring long stretches of travel, family time, school holidays, and integration weeks. This is what a life-first business looks like in practice — not as a philosophy, but as a structure and an identity. We explore why so many ambitious women feel stuck in cycles of over-effort, overthinking, and content overwhelm — and why sustainable scale requires fewer decisions, cleaner pathways, and deeper focus. I also take you behind the scenes of how my work is evolving in 2026, including: • Doubling down on Life First CEO as my anchor ecosystem • Why one core offer creates stronger roots and more sustainable growth • How messaging clarity (not more content) creates momentum • Why tools, systems, and navigation matter as businesses scale • The role of embodied, in-person experiences like Elevate Immersion • What writing my book and stepping into speaking represents at this stage of leadership This episode is for women who are ready to stop chasing the next strategy — and start standing fully inside what they're building, with calm confidence and intention. ⸻

Uncensored CMO
Rory Sutherland on why luck beats logic in marketing

Uncensored CMO

Play Episode Listen Later Jan 14, 2026 58:06


Our most popular guest ever is back - Rory Sutherland returns for a wide-ranging conversation on why marketing works best when it embraces luck, spontaneity, and a little irrationality. From the dangers of confected outrage and self-censorship to the unfair economics of marketing, Rory challenges the industry's obsession with logic, optimisation, and process.We discuss why success is often misunderstood as skill rather than luck, the value of doing a few things irresponsibly, and why inefficiency can be a feature rather than a flaw. As ever, Rory connects behavioural science, creativity, and business reality in ways few others can.Timestamps00:00 - Intro01:16 - How Rory deals with his new micro fame03:12 - How Jon shut down the London Underground07:04 - The problem with confected outrage10:34 - How self-censoring is affecting creativity12:10 - The power of spontaneity and luck in advertising16:03 - The unfair economics of marketing20:54 - Is success just luck?23:12 - Spend 95% responsibly, and 5% irresponsibly30:12 - Doubling down on what your competitors do badly34:32 - Why so many businesses are no longer customer focused35:29 - Inefficiency as a feature37:08 - The power of herd mentality43:26 - What marketers can teach the business world48:13 - Why internal process is killing businesses51:15 - Lessons from 200 years of The Spectator advertising54:47 - Rory's closing thoughts on marketing

The Successful Contractor Podcast
How Contractors Grow Through the Hard Stuff | Brandon Marshall

The Successful Contractor Podcast

Play Episode Listen Later Jan 13, 2026 41:10


Negotiate Your Career Growth
Job Security in the Era of AI Frenzy: How to Discern Signal from the Noise

Negotiate Your Career Growth

Play Episode Listen Later Jan 13, 2026 21:22 Transcription Available


Earlier today on January 13, 2026, LinkedIn published an article, "The 2026 job market: What to know about hiring, confidence and opportunity." The data is mixed: U.S. job growth is sluggish and many job seekers are feeling stuck where they are but not prepared for the job hunt. Plus, with a bulk of the growth in the economy happening in the AI infrastructure space, a forward-thinking professional can't help but wonder, "Will I be replaced by AI?" I'm answering this question head-on in this episode. I talk about what AI is excellent for, what it can't do, and what it means for smart leaders planning ahead.
What you'll learn:
00:00 – AI cannot fully replace human coaching and the value of human insight
01:30 – A quick case study on uncovering human blind spots and parlaying them to build generative intelligence
07:30 – The 2026 job market challenges and the push for AI-integrated skills
12:00 – Importance of discerning wisdom and cognitive agility for future career success
16:30 – Doubling down on humanity and bridge-building as a competitive edge
18:30 – Reflective questions for listeners about uncertainty and growth through human conversations
Text me your thoughts on this episode!
Enjoy the show? Don't miss an episode, listen and subscribe via Apple Podcasts or Spotify. Leave me a review in Apple Podcasts.
Connect with me:
Book a free hour-long consultation with me. You'll leave with your custom blueprint to confidence, and we'll ensure it's a slam-dunk fit for you before you commit to working with me 1:1.
Connect with me on LinkedIn
Email me at jamie@jamieleecoach.com

Wholesaling Inc with Brent Daniels
WIP 1905: LIVE Training - 90,000 Wholesalers Down to 12,000 (What This Means for You in 2026)

Wholesaling Inc with Brent Daniels

Play Episode Listen Later Jan 9, 2026 119:42


90,000 wholesalers entered the market and only 12,000 will survive. In this live training, Brent Daniels sits down with RJ Bates III to break down what's really happening in the wholesaling industry and what it will take to win in 2026 and beyond. They dive into why most wholesalers fail, the danger of chasing vanity metrics, and how staying disciplined, profitable, and “boring” can be the key to long-term wealth. From seller motivation and closing philosophy to marketing mistakes, scaling traps, and why manufacturing deals can destroy your business, this episode delivers hard truths every wholesaler needs to hear. If you want to stay in the game while others wash out, follow the TTP Training Program for more.
---------
Show notes:
(1:55) Beginning of today's episode
(3:47) Why most wholesalers fail after early success
(6:08) Wealth-building vs. chasing vanity metrics
(7:41) The danger of scaling before your business is self-sustaining
(8:41) Why “boring” marketing wins in 2026
(13:10) Doubling down on what already works
(14:25) Coachability as the key separator in wholesaling
(18:55) Aggressive rehab, conservative ARV, and buffer strategies
(20:22) Asking the right questions to uncover seller motivation
(21:33) Why not every lead should get a creative finance offer
(22:57) The danger of manufacturing deals
(25:03) Confidence, honesty, and closing the right deals
(27:55) Legal risks of wholesaling sub-to deals
(29:20) Transparency, disclosure, and protecting your business
(1:03:45) 70% of your income comes from the last 6 months
(1:41:57) Getting leads just by swiping your credit card
----------
Resources:
Connect with RJ on Instagram
DealMachine
PropStream
Batch Leads
To speak with Brent or one of our other expert coaches call (281) 835-4201 or schedule your free discovery call here to learn about our mentorship programs and become part of the Tribe.
Go to Wholesalingincgroup.com to become part of one of the fastest growing Facebook communities in the Wholesaling space. Get all of your burning Wholesaling questions answered, gain access to JV partnerships, and connect with other "success minded" Rhinos in the community. It's 100% free to join. The opportunities in this community are endless, what are you waiting for?

The Weekly Juice | Real Estate, Personal Finance, Investing
Why Scaling Too Early Almost Cost Us Everything | E351

The Weekly Juice | Real Estate, Personal Finance, Investing

Play Episode Listen Later Jan 7, 2026 54:38


The fastest way to kill momentum in business is scaling before you are ready. We learned that the hard way. For a long time, we thought growth meant doing more. Hiring faster. Adding systems. Spending money to buy speed. What we didn't realize was that piling more on top of a shaky foundation doesn't create momentum. It quietly bleeds it. This episode is a candid look back at what the past year actually taught us. Not the highlight reel, but the moments where things felt like they were moving forward until the numbers told a different story. We talk about where we scaled too early, the investments that didn't pay off, and how chasing growth almost cost us focus, profit, and clarity. We also break down what changed everything. Slowing down. Cutting complexity. Doubling down on what was already working instead of chasing the next shiny tactic. The real unlock wasn't more effort. It was better decisions. You'll hear how we're thinking about money, time, and energy heading into 2026, the filters we're using before making new investments, and how simplifying the business has created more leverage than any new system ever did. If you're building a business, investing in real estate, or trying to scale anything while juggling real life, this episode will help you spot where momentum leaks actually come from and how to fix them before they get expensive.
Book your call with Neo Home Loans: https://www.neoentrepreneurhomeloans.com/wealthjuice/
Book your mentorship discovery call with Cory

How to Trade Stocks and Options Podcast by 10minutestocktrader.com

Are you looking to save time, make money, and start winning with less risk? Then head to https://www.ovtlyr.com.
This one is a wild but important lesson, especially if you trade options or have ever been tempted by “easy” premium. In this video, we break down how an options trader managed to blow up $50 million on Christmas Eve using short-term iron condors. It sounds insane, but once you see how it happened step by step, it starts to make uncomfortable sense. The strategy looked great on the surface. High win rate. Fast payouts. Trades that could make money if the market went up, down, or sideways. That's the hook. And it's exactly why so many traders get pulled into selling short-term options without fully understanding what's lurking underneath. What really caused the damage wasn't one bad trade. It was what happened after. Losses led to bigger size. Bigger size led to narrower profit windows. Volatility dropped. Risk quietly exploded. And by the time the market made a totally normal move, the account was already cornered with nowhere to go. This video walks through why win rate can be one of the most dangerous metrics in trading, especially when risk and reward are out of balance. You'll see how collecting small premiums while risking larger losses creates a slow-motion disaster. Everything looks fine until suddenly it isn't. We also dig into why martingale-style thinking is so dangerous in options. Doubling down feels logical when you believe the market “has to” revert. But options expire. Time runs out. And markets don't care what feels fair. When volatility compresses and you keep forcing trades, you're not increasing your odds. You're just speeding up the inevitable.
Here are some of the big takeaways covered in this breakdown:
✅ Why a high win rate can still lead to massive losses
✅ How iron condors quietly become more dangerous as volatility drops
✅ The hidden risk of selling premium in calm markets
✅ Why martingale position sizing fails in real markets
✅ How small, normal market moves can wipe out oversized option positions
One of the most important themes in this video is restraint. Sometimes the smartest move is doing nothing. Sitting in cash isn't boring. It's disciplined. A lot of traders blow up not because they're reckless, but because they feel pressured to always be in a trade, even when conditions are stacked against them. This isn't about dunking on anyone. It's about learning from a very expensive mistake so you don't have to repeat it. If you trade options, or you're thinking about selling premium because it looks “safe,” this is a must-watch lesson in how risk actually behaves. If this helps you slow down, rethink position sizing, or avoid forcing trades when volatility is low, then it's done its job. Share it with someone who needs to hear this before the market teaches the lesson the hard way.
Gain instant access to the AI-powered tools and behavioral insights top traders use to spot big moves before the crowd. Start trading smarter today.
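As a rough, hypothetical illustration of the win-rate trap described above (the figures below are invented for teaching purposes and are not the trader's actual positions), this short Python sketch computes the expectancy of a trade that wins small 90% of the time but occasionally loses big, then simulates a martingale-style run where size doubles after every loss.

import random

# Hypothetical short-premium trade: win small often, lose big rarely.
WIN_PROB, WIN_AMT, LOSS_AMT = 0.90, 1_000, -12_000

# Expectancy = probability-weighted average outcome per trade.
expectancy = WIN_PROB * WIN_AMT + (1 - WIN_PROB) * LOSS_AMT
print(f"Expectancy per trade: ${expectancy:,.0f}")  # prints $-300 despite a 90% win rate

def martingale_run(start_size=1, trades=200, seed=1):
    """Double position size after every loss; return the final P&L."""
    random.seed(seed)
    pnl, size = 0.0, start_size
    for _ in range(trades):
        if random.random() < WIN_PROB:
            pnl += size * WIN_AMT
            size = start_size          # reset size after a win
        else:
            pnl += size * LOSS_AMT
            size *= 2                  # double down after a loss
    return pnl

print(f"Sample martingale P&L over 200 trades: ${martingale_run():,.0f}")

Even before the simulation, the expectancy line shows the core problem: a 90% win rate still loses money on average when the occasional loss is twelve times the typical win, and doubling size after losses only determines how quickly a losing streak becomes unrecoverable.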

The CharacterStrong Podcast
Top 6 of 2025: Doubling Tier 1 Usage: Building Stronger Classrooms Through Character, Connection, and Data - Amy Fairchild & Crystal Hooper

The CharacterStrong Podcast

Play Episode Listen Later Jan 2, 2026 21:14


Learn More About CharacterStrong:
Access FREE MTSS Curriculum Samples
Request a Quote Today!
Learn more about CharacterStrong Implementation Support
Visit the CharacterStrong Website

Slappin' Glass Podcast
John Andrzejek on Scrambling vs. Anti-Scrambling Defensive Systems, Doubling the Post, and PNR Cutting Actions {Campbell}

Slappin' Glass Podcast

Play Episode Listen Later Jan 2, 2026 60:32


In this episode of Slappin' Glass, we're joined by John Andrzejek, Head Coach at Campbell and former defensive coordinator for Florida's national championship team, for a deep dive into the real trade-offs that shape elite defensive systems. Coach Andrzejek walks us through how his defensive philosophy has evolved across stops at St. Mary's, Columbia, Washington State, Florida, and now Campbell—highlighting the tension every staff must navigate between precision and pragmatism, technique and energy, and staying out of trouble versus thriving inside the scramble. We explore the decision-making behind scrambling vs. anti-scrambling defenses, how and why he blends principles from St. Mary's, Houston, and Iowa State, and what it truly takes to guard the modern, spacing-driven game. The conversation gets deep into the weeds on no-middle principles, switching high and low, tagging schemes in middle pick-and-roll, and organizing rotations when things inevitably break down. Offensively, Coach Andrzejek shares how he teaches cutting around the pick-and-roll through a mix of rules and reads, why simplicity drives better decision-making, and how repetition of core situations builds true situational awareness. We also tackle post-doubling philosophies, personnel adjustments, practice design, and the balance between scouting detail and playing fast. As always, we close with a Start, Sub, or Sit that dives into cutting around the pick-and-roll and post-doubling strategies, plus Coach Andrzejek's thoughts on the best investment he's made in his coaching career. This is a clinic-level conversation on defensive problem-solving, offensive clarity, and building systems that hold up against elite talent.
What You'll Learn:
The strategic trade-offs between scrambling vs. anti-scrambling defensive systems
How elite programs blend no-middle principles with modern spacing realities
Why playing really hard often matters more than perfect technique
How to organize rotations and tags when the ball gets to the middle
Switching high and low to keep the ball out of the paint
Teaching cutting around the pick-and-roll using rules that unlock reads
Why offensive simplicity leads to better decision-making
Different philosophies for doubling the post and protecting the rim
How practice design, film, and repetition build defensive awareness
The long-term value of film study and coaching mentorship
To join coaches and championship winning staffs from the NBA to High School from over 60 different countries taking advantage of an SG Plus membership, visit HERE!

Above Board with CandorPath
2025 Wrapped: Money, Life & Purpose

Above Board with CandorPath

Play Episode Listen Later Dec 31, 2025 8:42


This episode is our version of a year-end wrap. We look back on the biggest themes from 2025, from money and markets to mindset, purpose, and growth, and reflect on what actually mattered most. It's a highlight reel of the conversations that shaped the year, plus a look ahead at what's coming next. If you've been with us all year (or you're just jumping in), this one ties it all together. Thanks for spending this year with us!
Past Episodes Referenced in This Wrap-Up:
- New Year, New Disciplines (EP 0145) – Kicking off 2025 with a focus on clarity, intention, and what deserves your energy
- Everything Everywhere Fallacy (EP 0147) with Mitch Matthews – Why “being one place well” matters more than doing everything at once
- Part 1: I'm Learning to "Let Them" (EP 0155) – Learning to release control and stop managing outcomes
- Part 2: I'm Learning to "Let Me" (EP 0156) – Doubling down on what you can control
- Loss Aversion: Losing Hurts More Than Winning Feels Good (EP 0162) – Understanding emotional decision-making during market uncertainty
- Your Passion Doesn't Retire with Brandon Hatcher (EP 0174) – Why purpose doesn't end when your career does
00:00 Welcome to the Above Board Podcast Year-End Recap
00:09 Reflecting on 2025: Achievements and Inspirations
01:08 Key Themes of 2025: Financial Clarity and Personal Growth
01:31 Impactful Conversations: Focus and the Everything Everywhere Fallacy
02:44 Q2 Highlights: Embracing the Let Them Theory
04:30 Q3 Insights: Navigating the Financial Markets
05:37 Q4 Focus: People, Purpose, and Fulfillment
07:37 Looking Ahead to 2026: Exciting Plans and Gratitude

Group Dentistry Now Show: The Voice of the DSO Industry
Looking Ahead: The Future of Group Dentistry with the AADGP & Lavender Dental Group

Group Dentistry Now Show: The Voice of the DSO Industry

Play Episode Listen Later Dec 22, 2025 32:36


Dr. Mark Sivers, Partner of Aligned Dental, Callie Elmore, the Executive Director of the AADGP, and Dan Redifer, CEO of Lavender Dental Group discuss:
The February 4-6 AADGP event in Austin, TX
Lavender Dental Group's story
Doubling down on collaboration & networking
Use code gdnow26 to save 25% at https://www.aadgp.org/aadgp2026/
Learn more about the AADGP here - https://www.aadgp.org/
Learn about Lavender Dental Group here - https://www.lavenderdental.com/

We Not Me
How do you lead a team through economic uncertainty?

We Not Me

Play Episode Listen Later Dec 19, 2025 19:43


Leading through uncertainty means accepting complexity rather than fighting it. The most powerful tool for doing so is clarity. While conventional wisdom suggests focusing on trust-building and communication skills, Squadify data shows that starting with clarity – specifically around shared goals, processes, and measures of success – is what actually transforms groups of individuals into cohesive teams and drives performance.
Three reasons to listen:
Learn to befriend uncertainty and focus on what you can influence
Discover how to build team cohesion through clarity rather than trust exercises
Understand how teams work together as the key performance driver
Episode highlights
[00:02:31] Jamie's question
[00:03:36] Befriending uncertainty
[00:05:38] Are you a team, or a TINO?
[00:08:51] The sixth dysfunction in teams
[00:11:12] The trigger question for high performance
[00:13:42] Doubling down on humanity
[00:16:41] Coming up in 2026
Links
Track and improve your team performance with Squadify
Leave us a voice note

How to Trade Stocks and Options Podcast by 10minutestocktrader.com
No Santa Rally⁉️ Is this the Beginning of the END⁉️

How to Trade Stocks and Options Podcast by 10minutestocktrader.com

Play Episode Listen Later Dec 18, 2025 40:06


Are you looking to save time, make money, and start winning with less risk? Then head to https://www.ovtlyr.com.
It was one of those market days that makes people panic. Big red candles, ugly price action, and nonstop noise about what is “supposed” to happen next. But this video is about why those days do not need to control your emotions, your decisions, or your account when you actually have a plan. This session walks through what separates reactive traders from disciplined ones. When the market is down hard, most people feel pressure to act, predict, or fix something. What you see here is a very different approach. Instead of guessing bottoms or averaging down blindly, the focus stays on trend structure, signals, expectancy, and risk control. When you know exactly what you are waiting for, the stress drops dramatically and the decision-making becomes simple. A big theme in this video is psychology. Watching price move without a framework feels chaotic. Once you understand trends, EMAs, ATR, and why exits matter just as much as entries, the market stops feeling random. You do not need to stare at charts all day. You do not need to babysit positions. You just follow the plan and let the data do the heavy lifting. There is also a deep dive into why prediction-based trading fails so many people. Doubling down, hoping, or assuming something has to bounce eventually sounds logical until you realize you never know how far a move can go. This video breaks down why waiting for confirmation puts you in profitable trends faster, with less time underwater and far less emotional damage. Midway through, the discussion shifts into real backtesting data. This is where things get interesting. Instead of “trust me” opinions, you see how expectancy, win rate, ATR behavior, and Monte Carlo testing actually work together. You also learn why some sectors are avoided completely, even if they look tempting, and how frequency and risk-adjusted returns matter more than chasing home runs.
Key ideas covered in this video include:
✅ Why red days are only scary when you do not have a plan
✅ How trend signals remove emotion from trading decisions
✅ The role of ATR in defining risk, exits, and expectations
✅ Why random trade selection can still work when expectancy is positive
✅ How V-shaped recoveries waste time compared to trend confirmation
The bigger takeaway is simple. Trading does not have to be stressful, dramatic, or chaotic. When your rules are clear and tested, you already know what you will do before price ever moves. That clarity is what allows traders using the OVTLYR approach to stay calm while everyone else is freaking out. If you want a calmer, more repeatable way to approach the market that is grounded in data instead of hype, this video is worth your time.
Gain instant access to the AI-powered tools and behavioral insights top traders use to spot big moves before the crowd. Start trading smarter today.
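For readers curious what a Monte Carlo check like the one mentioned here can look like, this is a minimal, hypothetical Python sketch (the per-trade returns are made-up placeholders, not OVTLYR data or a real backtest) that resamples a small set of trade outcomes to estimate the spread of ending equity and worst drawdowns a strategy could plausibly produce.

import random

# Hypothetical per-trade returns (fractions of account), not real backtest data.
trade_returns = [0.02, -0.01, 0.015, -0.008, 0.03, -0.012, 0.01, -0.01, 0.025, -0.009]

def monte_carlo(trades, n_paths=5_000, trades_per_path=250, seed=42):
    """Resample trade outcomes to estimate median final equity and 95th-percentile max drawdown."""
    random.seed(seed)
    finals, worst_drawdowns = [], []
    for _ in range(n_paths):
        equity, peak, max_dd = 1.0, 1.0, 0.0
        for _ in range(trades_per_path):
            equity *= 1 + random.choice(trades)   # draw a random historical trade
            peak = max(peak, equity)
            max_dd = max(max_dd, 1 - equity / peak)
        finals.append(equity)
        worst_drawdowns.append(max_dd)
    finals.sort()
    worst_drawdowns.sort()
    return finals[len(finals) // 2], worst_drawdowns[int(0.95 * len(worst_drawdowns))]

median_equity, dd_95 = monte_carlo(trade_returns)
print(f"Median ending equity: {median_equity:.2f}x starting capital")
print(f"95th-percentile max drawdown: {dd_95:.1%}")

The point is not the specific numbers but the shape of the distribution: even a positive-expectancy rule produces a wide range of paths, which is why position sizing and drawdown tolerance matter as much as the entry signal.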

Strength Changes Everything
Workout and Recovery Secrets That Actually Work

Strength Changes Everything

Play Episode Listen Later Dec 16, 2025 23:42


Are you sabotaging your strength gains without realizing it? Amy Hudson and Dr. James Fisher continue the Series on the Principles of Exercise Design. In today's episode, they break down the concept of inroading, explain how every workout triggers both fatigue and adaptation, and reveal why recovery is just as important as effort. They cover how to maximize strength gains, avoid plateaus, optimize training frequency, and use your body's natural recovery cycle to build lasting progress. Dr. Fisher explains how inroading works. It's the immediate fatigue you feel when a muscle is pushed to true effort. That short-term drop in performance is exactly what triggers long-term adaptation. Dr. Fisher highlights why you always feel weaker at the end of a workout. The workout itself isn't where strength appears; it's where the demand for strength is created. Your body waits until you're resting to build the improvements that lead to more strength. Amy reveals why inroading is such an important part of strength training. It lets you reach the deeper layers of muscle fibers that light, easy reps never touch. And once you can reach those fibers consistently, your long-term progress becomes far more predictable. Dr. Fisher explains the two phases every workout goes through. First, you feel the immediate drop in energy and strength, and that part happens instantly. The second part, the repair phase, is quiet, slow, and where all the positive changes take place. Dr. Fisher highlights the problem with insufficient recovery. Dr. Fisher explains how strength gains come from a simple pattern. You give your body a clear challenge, then you get out of the way long enough for it to respond. When that cycle isn't interrupted, your progress becomes steady and consistent. Amy covers how long most people need to recover from a hard session. For many, that window sits somewhere between 24 and 48 hours, especially after real effort. That's why back-to-back strength days tend to do more harm than good. What long-term research says about training frequency. Two workouts a week hits the sweet spot where your body gets enough stimulus but still has room to recover. You can grow with once-a-week sessions too, but going past two rarely adds any new benefit. Dr. Fisher explains how outside stress affects your progress in the gym.  Poor sleep, emotional strain, or a stressful week at work drains the same energy your workouts require.  Amy covers why the best personal trainers pay close attention to recovery when designing a strength plan. They know the workout is only half the story, and the real improvements show up when your body has time to adapt.  Dr. Fisher highlights why consistency wins out over intensity. Showing up twice a week across months and years outperforms short bursts of extreme effort followed by burnout. Amy explains what actually happens after a workout ends.  The session challenges your muscles, but the growth happens later, when you're resting and not even thinking about the gym. If recovery is high-quality, every return session should feel just a bit stronger than the last. Dr. Fisher covers why extra sets aren't the secret to growth. Once every muscle fiber has been recruited, more work doesn't add more stimulus; it only adds more fatigue. And that extra fatigue delays the recovery you depend on for strength gains. Dr. Fisher explains why doing more exercise isn't the same as doing better exercise.  According to Dr. Fisher, making up for missed workouts is a trap. 
Doubling your workload because you skipped a session only leaves you sore, tired, and drained for days afterward. Learn why simple, focused workouts beat complicated ones. A handful of well-chosen exercises taken to meaningful effort provide everything your body needs. Once that stimulus is delivered, more volume just becomes noise. Amy covers the repeating cycle behind effective strength training. You challenge the muscle, you give it space to rebuild, and then you return slightly more capable than before.  Dr. Fisher explains how a good personal trainer will use inroading to push you just enough for growth. It's not about doing more work than necessary, but hitting the right intensity so your muscles are challenged. Then, with proper recovery, each session builds on the last, and progress becomes consistent. Dr. Fisher explains supercompensation in a way that actually makes sense. A hard workout drives your performance slightly below normal, but recovery lifts you above that normal line once the repair is done. And that rise above baseline is where the gains hide. Dr. Fisher highlights what it really means to train smarter. You put in the right amount of effort, protect your recovery, and let those small improvements stack up. Over time, that balance takes you much further than grinding endlessly in the gym.     Mentioned in This Episode: The Exercise Coach - Get 2 Free Sessions! Submit your questions at StrengthChangesEverything.com     This podcast and blog are provided to you for entertainment and informational purposes only. By accessing either, you agree that neither constitute medical advice nor should they be substituted for professional medical advice or care. Use of this podcast or blog to treat any medical condition is strictly prohibited. Consult your physician for any medical condition you may be having. In no event will any podcast or blog hosts, guests, or contributors, Exercise Coach USA, LLC, Gymbot LLC, any subsidiaries or affiliates of same, or any of their respective directors, officers, employees, or agents, be responsible for any injury, loss, or damage to you or others due to any podcast or blog content.

Top Contractor School - The Podcast
Passing Down the Business, Doubling Sales & Eliminating Blind Spots w/ Doug Brown | Top Contractor School Podcast

Top Contractor School - The Podcast

Play Episode Listen Later Dec 10, 2025 47:00


Welcome back to the Top Contractor School Podcast, where contractors come to grow stronger, scale smarter, and build businesses that last. In this episode, Eric Guy sits down with Doug Brown — CEO of CEO Sales Strategies, revenue expert, and the mind behind the Double Your Sales methodology. Doug has built, scaled, burned down, rebuilt, and advised 37 companies, including several in the contracting space. His sales systems have powered some of the world's top organizations, and today he brings that elite-level knowledge straight to the TCS community. From navigating family dynamics to building real leverage to mastering sales follow-up, Doug lays out a blueprint that every contractor needs to hear.

Contractor Cuts
From First-Time Attendee to Doubling Revenue: What One Retreat Did for His Business

Contractor Cuts

Play Episode Listen Later Dec 8, 2025 39:47 Transcription Available


Fred Mancuso walked away from a successful career as a chef and built FM Contracting in Chicago, IL. This interview is about how he moved from using a paint brush as a side job to running a real general contracting business with systems, subs, and large projects. We discuss his experience at our Annual Planning Retreat last year and how what he learned led to doubling his revenue in just a year.
• moving from labor to leadership
• learning to leverage subcontractors
• building systems for estimates and agreements
• scheduling site time with intent
• creating a predictable pipeline
• preparing for first project manager hire
• targeting consistent six-figure months
• applying for a Chicago GC license
• doubling revenue through discipline
• the value of community and coaching
If you want to come hang out with Freddie and Clark, we'd love to see you guys at the retreat in January. Go to ProStruct360.com/annual-growth-retreat/ to sign up.
Join us January 11–13 in Nashville for the Chart the Course 2026 Planning Retreat. Sign up now and get three free coaching sessions before the event to finish 2025 strong and hit 2026 with a clear game plan. At the retreat, you'll tackle systems, hiring, marketing, and leadership alongside ambitious contractors, leaving with a blueprint for growth. Spots are limited—visit prostruct360.com to learn more!
Have a question or an idea to improve the podcast? Email us at team@prostruct360.com
Want to learn more about our software or coaching? Visit our website at ProStruct360.com

Financial Advisor Success
Ep 466: Doubling AUM From Under $300M To Over $600M In 3.5 Years By Making Investments To Systematize For Scale with Morgan Nichols

Financial Advisor Success

Play Episode Listen Later Dec 2, 2025 89:59


Growth rarely comes without sacrifice—and for many advisory firm owners, that means making bold investments that might pinch margins today to create momentum for tomorrow. This episode explores how strategic hiring, branding, and compensation design can help accelerate growth while building a stronger, more scalable team. Morgan Nichols is the CEO of LifeBranch Wealth Partners, an independent broker-dealer practice based in Grapevine, Texas, overseeing $630 million in AUM for 830 households. Listen in as Morgan shares how she doubled her firm's size in just three and a half years in part by investing ahead of the curve, from hiring before capacity hits a breaking point to rebranding her firm to reflect a broader, growth-oriented vision. You'll learn how she designed clear career paths and compensation models that balance stability with opportunity, why she added a dedicated business development director to fuel new growth, and what helped her stay resilient through challenging growth stages.  For show notes and more visit: https://www.kitces.com/466

Build Your Network
Make Money by Thinking of Your Older Self

Build Your Network

Play Episode Listen Later Dec 2, 2025 20:24


In this solo-style episode, Travis sits down with his producer Eric for an introspective conversation using the “rocking chair test,” a thought experiment where you imagine advice from your much older self at the end of your life. Instead of featuring an external guest, this episode turns the spotlight on Travis's own mindset around regret, failure, and what he wishes he'd stop (and keep) doing in business and life.
On this episode we talk about:
How the “rocking chair test” helps you see your current life from the perspective of your 88–108-year-old self
Why most people regret what they didn't do more than what they did “wrong”
Travis's tendency to downplay wins, fixate on failures, and how that has held him back
How his upbringing and internalized beliefs still shape his self-talk and response to criticism
The story of building a 7-figure business, losing it, and struggling to own his successes afterward
Why podcasting is the thing he's most proud he stuck with, despite years of low direct financial return
The tension between chasing novelty and doubling down on what you're genuinely great at
Using your future self as a filter to judge fears about embarrassment, judgment, and starting over
How meditating on death and limited time can create urgency to take bigger swings now
Top 3 Takeaways:
Your future self will almost always regret the risks you never took more than the attempts that didn't work out.
Fixating on failures while dismissing your wins can quietly cap your potential, even when you've built real results.
Doubling down on your core strengths for income, while using side projects to satisfy curiosity and novelty, is a powerful way to build both wealth and fulfillment.
Notable Quotes:
"I tend to significantly downplay the successes I've had and highlight the failures I've had, and that's held me back in a lot of ways."
"People on their deathbed usually regret the things they didn't do far more than the things they did that didn't work out."
"Life is short, and in 200 years none of this will matter—so why not just go for it and take another big swing?"

Pop Culture Retro Podcast
Pop Culture Retro interview with pop culture icon, Morgan Fairchild!

Pop Culture Retro Podcast

Play Episode Listen Later Nov 21, 2025 59:13


Join director and former child actor Moosie Drier, and author Jonathan Rosen, as they chat with pop culture icon, Morgan Fairchild! Morgan discusses her roles in such series as Falcon Crest, Dallas, and Flamingo Road; doubling for Faye Dunaway in Bonnie and Clyde; acting with legends such as Lloyd Bridges, George Segal, and Roddy McDowall; her new podcast with her sister, Two Bitches from Texas; and much more!

Beyond Yacht Rock
135. DE-DOUBLING USE YOUR ILLUSION

Beyond Yacht Rock

Play Episode Listen Later Nov 13, 2025 98:26


Every rock fan of a certain age has only one dream: a perfect single-album cutdown of Guns N' Roses' Use Your Illusion I & II. Today the guys take their shot.