Podcasts about adapting

  • 9,993 PODCASTS
  • 15,161 EPISODES
  • 39m AVG DURATION
  • 2 DAILY NEW EPISODES
  • Feb 17, 2026 LATEST

POPULARITY

(popularity chart, 2019–2026)

Best podcasts about adapting

Show all podcasts related to adapting

Latest podcast episodes about adapting

Working Class Audio
WCA #583 with Richard Chycki Part 1 – Networking, Finances, ATMOS, Learning from Major Artists, and Adapting to Technological Changes in Audio

Feb 17, 2026 · 75:43


Matt is joined by multi-platinum mixer and engineer Richard Chycki, whose clients include such rock royalty as Rush, Aerosmith, Dream Theater, Skillet, Mick Jagger, Alice Cooper, Pink, and many more.

In This Episode, We Discuss:
NAMM Experience
Current State of Immersive Audio
Moving to Nashville: Reasons and Plans
Early Musical Journey and Transition to Engineering
The Shift from Musician to Engineer
Adapting to Technological Changes in Audio
The Future of Atmos and Immersive Audio
Artist Reactions to Immersive Mixing
The Evolution of Atmos Technology
Learning from Major Artists
Navigating the Music Industry
Financial Strategies in Music Production
The Importance of Networking
Mixing Classic Records in Atmos

Links and Show Notes:
Rich's Site
Matt's Rant: The Expenses

Credits:
Guest: Richard Chycki
Host/Engineer/Producer: Matt Boudreau
Editing: Anne-Marie Pleau
WCA Theme Music: Cliff Truesdell
The Voice: Chuck Smith

Mind of a Football Coach
The Evolution of the Run and Shoot: Insights from Wayne Anderson

Feb 17, 2026 · 50:22


In this episode, Coach Wayne Anderson shares his extensive experience in football coaching, particularly focusing on the run and shoot offense. He discusses the origins of the offense, its evolution, and how he adapts it to fit his coaching style. Coach Anderson emphasizes the importance of teaching route adjustments, balancing the run game with passing concepts, and the significance of special teams. He also reflects on his coaching philosophy, viewing it as a ministry opportunity to positively impact young athletes' lives beyond the field.

Chapters
00:00 Introduction to Coach Wayne Anderson
02:00 The Origins of the Run and Shoot Offense
10:07 Adapting the Run and Shoot: Personal Touches
20:03 Teaching Route Adjustments in the Run and Shoot
29:50 The Run Game: Balancing Pass and Run Concepts
39:02 Coaching as a Ministry: Wayne's Philosophy

Learn more about your ad choices. Visit megaphone.fm/adchoices

Design Curious | Interior Design Podcast, Interior Design Career, Interior Design School, Coaching
178 | 5 Lessons on Building an International Interior Design Career With Elliot James

Feb 16, 2026 · 24:43


What if the fear holding you back isn't failure, but the thought of never trying at all?

In this episode, I sit down with Elliot James, founder of a multi–award-winning international interior architecture studio, to talk honestly about what it takes to build a creative career that spans countries, cultures, and markets. Elliot didn't follow a traditional path. He didn't wait until everything felt "safe." Instead, he followed his curiosity, his ambition, and his passion for design, sometimes with nothing more than a laptop, a website, and a willingness to knock on doors.

If you're an interior designer (or aspiring designer) who dreams of bigger projects, international opportunities, or breaking into luxury residential, hospitality design, wellness-focused environments, or commercial projects, but you're afraid of getting it wrong, this conversation is for you. We talk about persistence, risk-taking, networking, word-of-mouth referrals, and how adapting to different cultures can open doors you never knew existed.

This episode is a reminder that creative careers aren't built by waiting. They're built by moving forward, one bold decision at a time.

Featured Guest
Elliot James is the founder of Elliott James Interiors, a multi–award-winning international interior architecture studio specializing in luxury residential projects, hospitality design, and wellness-focused environments. With studios in Singapore, Dubai, and London, Elliot's work blends bespoke furniture design, thoughtful client experience, and cultural adaptability to create spaces that function as true sanctuaries.

What You'll Learn in This Episode
✳️ How to follow passion without fearing creative failure
✳️ Building an international interior design career strategically
✳️ Networking strategies that lead to word-of-mouth referrals
✳️ Taking smart risks to grow your design business
✳️ Adapting to cultures in luxury and hospitality markets

Read the Blog >>> 5 Lessons on Building an International Design Career

NEXT STEPS:

Confessions of a Closet Romantic
Bookish: Wuthering Heights/People We Meet on Vacation

Feb 15, 2026 · 27:49


I survived a bad winter storm that knocked out my heat and electricity for almost a week, so it's good to be back!

Speaking of drama, Wuthering Heights is one of my favorite books of all time, so I've been following the crazy anticipation for the Emerald Fennell film adaptation of the story that just opened this weekend. Adapting beloved classic stories for the screen isn't easy: filmmakers will never manage to perfectly interpret the story for every fan of the source material, and they have to earn attention in ways that books don't. Some filmmakers really get it right, though, despite the challenges, and when they manage to honor the original story while staying committed to their own version of it, beloved characters live another day. Classic books have survived for a reason, and I think they can handle all of the red latex corsets thrown at them.

https://www.confessionsofaclosetromantic.com

It was certain that this movie was going to be hot and look stunning, no matter what.
I think this is the ultimate review of the film and I haven't even seen it yet. A Tale of Two Hotties.
Is this the greatest love story of all time?! Or is that our trauma talking? The 1939 adaptation of Wuthering Heights is full of capital-R Romance and scenery-chewing, and resembles the book only a little, but it's glorious.
I will watch the whole 1939 movie just for this scene.
I think they got everything I loved about this book exactly right.
"Based on the timeless love story by Jane Austen" and after that, they need to sell us on it because we are Fans. Well, personally I bought everything this version of Persuasion was selling.
It's far from a perfect movie, and I haven't read the book, but the casting and direction of Which Brings Me to You is perfection, and I was glued to the screen from beginning to end.
The Pop Culture Happy Hour from NPR is a podcast and newsletter that covers the "buzziest" movies, TV, music, books, video games and more.

Support the show

If you enjoyed this episode, please click share in your podcast app and tell your friends! Thanks for listening!

Scaling UP! H2O
463 Mapping the Future of Water Innovation with Paul O'Callaghan

Feb 13, 2026 · 67:56


"If you say something over and over often enough, it becomes true because perception is reality."

Paul O'Callaghan has built a career at the intersection of water science, wastewater realities, and the practical question every operator and executive eventually faces: what actually moves innovation from idea to adoption. As Founder and CEO of BlueTech Research, Paul explains how his team helps decision-makers put capital to work more efficiently in water by reducing uncertainty and separating signal from noise. He describes patterns he's watched repeat across water entrepreneurs, pilots, and product-market fit, and why "innovation" often breaks down simply because utilities, investors, and founders are using the same word to mean different things.

Capital, fit, and the language gap
Paul unpacks what it takes to align an investor's expectations with a technology's true pathway to scale. He contrasts different "types" of innovation and why matching the right investor, entrepreneur, market, and timeline matters as much as the technology itself. The conversation also highlights why solving a problem someone has today is often a safer starting point than betting everything on a problem that might arrive tomorrow.

Regulations as a driver and a risk
Regulation matters in water and wastewater, but Paul cautions against building an entire business on the hope that rules will create a market on schedule. He walks through timing risk, enforcement uncertainty, and why tracking policy momentum matters as much as tracking the text of the regulation itself. He also notes a shift toward more "aspirational" regulation focused on reuse, regeneration, and systems-level outcomes.

Storytelling that changes adoption
From Brave Blue World to Our Blue World, Paul shares what he learned about making water personal and compelling without reducing it to doom-and-gloom narratives. The stories he tells connect to a core professional challenge: technologies enable outcomes, but adoption accelerates when people can see and want the "better" future those outcomes create.

Listen to the full conversation above. Explore related episodes below. Stay engaged, keep learning, and continue scaling up your knowledge!

Timestamps
02:33 - Trace's message on finding "your next love" through learning
09:25 - Words of Water with James McDonald
11:25 - AWT connection and the importance of being challenged by community
13:06 - Industrial Water Week dates for "this year" (Oct 5–9)
14:02 - Upcoming Events for Water Treatment Professionals
19:15 - Interview with Founder & CEO of BlueTech Research, author of The Dynamics of Water Innovation, Executive Producer of Brave Blue World and Our Blue World
22:20 - Pivot moment into water as a career (Malaysia, Edinburgh course, "living machines")
25:15 - What BlueTech Research does (reducing uncertainty, helping capital work efficiently)
27:50 - How startups connect with BlueTech and why storytelling matters
30:09 - Matching investors, entrepreneurs, and markets (alignment and "different languages")
33:00 - The role of regulations (timing risk and market realities)
35:15 - How BlueTech keeps up (themes, emerging areas, and using AI for tracking legislation)
36:30 - Paul's book: The Dynamics of Water Innovation (why he wrote it and who it's for)
40:49 - Documentary storytelling origin and Discovery Channel experience
44:22 - How celebrities got involved and why the outreach worked
45:30 - Why they made a second film and the goal of making water personal
48:03 - Viewer feedback, education impact, and grassroots screening stories
50:08 - "Water 2050" video game inspired by the films
51:21 - Additional ripple effects and "halo" projects (curriculum, photography competition, water walks)
53:06 - Where water innovation is going (desirability, storytelling, and "leaving water")
56:07 - Advice for people with ideas (talk to people, generosity of the sector, ikigai, long-term view)
58:08 - Ostara / Crystal Green story (finding the operator's "today problem")
59:54 - One point Paul wants to leave: "It's a journey, enjoy it."

Quotes
"We do our best to help people put capital to work more efficiently to solve water challenges."
"Try and find a problem that someone has today, ideally."

Connect with Paul O'Callaghan
Email: paul.ocallaghan@bluetechresearch.com
Website: BlueTech Research – Actionable Water Technology Market Intelligence
braveblueworldstudios | Instagram | Linktree
LinkedIn: https://www.linkedin.com/in/o2environmental/

Guest Resources Mentioned
The Dynamics of Water Innovation: A Guide to Water Technology Commercialization by Lakshmi M. Adapa, Paul O'Callaghan, and Cees Buisman
Watch Brave Blue World: Racing to Solve Our Water Crisis | Netflix
"Dynamics of water innovation: Insights into the rate of adoption, diffusion and success of emerging water technologies globally" – Wageningen University & Research
"Wastewater Technology Fact Sheet: The Living Machine" – U.S. EPA
"Brave Blue World" film – Science on Screen synopsis
"Our Blue World: A Water Odyssey" – IMDb overview
"Water Reuse for Industrial Applications Resources" – U.S. EPA
"ANSI/AAMI ST108:2023—Water for the Processing of Medical Devices" – ANSI Blog
"Key EPA Actions to Address PFAS" – U.S. EPA
"The Philosophy of Ikigai: 3 Examples About Finding Purpose" – PositivePsychology.com
Fluke: Chance, Chaos, and Why Everything We Do Matters by Brian Klaas
Rivers of Power: How a Natural Force Raised Kingdoms, Destroyed Civilizations, and Shapes Our World by Laurence C. Smith

Scaling UP! H2O Resources Mentioned
AWT (Association of Water Technologies)
Scaling UP! H2O Academy video courses
Submit a Show Idea
The Rising Tide Mastermind
415 Green Building Updates: What You Need to Know
004 It's Not Easy Being Green!
032.5 The One That Takes You to AWT's 2018 Technical Training
022 The One with Tim Fulton
280 The One About Retaining Top Talent
368 Adapting to the New Workforce: Attracting Top Talent
413 Charting the Future: Mastering the Art of Strategic Planning

Words of Water with James McDonald
Today's definition is a single, reactive molecule, usually an organic compound, having the ability to join with a number of similarly defined molecules to form a polymer.

2026 Events for Water Professionals
Check out our Scaling UP! H2O Events Calendar, where we've listed every event Water Treaters should be aware of, by clicking HERE.

#dogoodwork
The Service Stack: What Remains When Client Services Are Being Eaten by AI

Feb 12, 2026 · 38:12 · Transcription Available


AI's Impact on Client Services: What You Need to Know

Dive into the transformative effects of AI on client services with Raul. Discover why AI won't just be a tool, but a replacement for many tasks in agencies and consultancies. Raul shares real-world examples, offers a detailed breakdown of the five layers of client service work, and provides insights on how to adapt in this new era of value creation. Learn how to position yourself and your business to thrive amidst these changes.

00:00 Introduction: The AI Revolution in Client Services
00:33 Real-World Applications of AI
02:41 The Future of Client Services
04:50 Understanding the Services Stack
05:46 Layer 1: Execution
07:30 Layer 2: Template Strategy
09:19 Layer 3: Judgment-Driven Strategy
12:42 Layer 4: Transformation and Accountability
16:21 Layer 5: Belief
19:54 The Power of Belief in Client Relationships
21:35 The Future of the Agency Model
22:40 Introducing the Craft Model
23:35 AI's Role in the Craft Model
25:20 Adapting to the New Reality
28:59 Practical Steps for Transitioning
35:29 Final Thoughts and Call to Action

The Coaching 101 Podcast
The Key to Long-Lasting Coaching Success w/ Kevin Swift

Feb 12, 2026 · 80:02


Join hosts Daniel Chamberlain and Kenny Simpson on the Coaching 101 Podcast as they welcome special guest Kevin Swift from Oregon. In this episode, they delve into the importance of making football simple for success, exploring Coach Swift's impressive 41-year career. They cover topics like avoiding coaching burnout, evolving as a coach without losing core identity, the importance of relationships over X's and O's, and sustaining longevity in the coaching profession. Additionally, hear insights about balancing personal life with a coaching career, building a supportive community, and the significance of player relationships. Tune in for a wealth of knowledge and experience from seasoned coaches who have thrived in the football industry.

00:00 Introduction to the Coaching 101 Podcast
00:34 Meet Coach Kevin Swift
01:50 Coach Swift's Coaching Journey
05:27 Quote of the Week and Sponsor Shoutouts
09:36 Discussing Longevity in Coaching
16:30 Challenges and Rewards of Coaching
36:46 Evolving as a Coach
39:37 The Importance of Innovation in Coaching
40:19 Building a Collaborative Coaching Environment
41:18 Adapting to New Defensive Strategies
41:48 The Journey of Becoming a Head Coach
42:52 Challenges of Coaching in Small Towns
43:52 Developing Assistant Coaches from Scratch
45:32 The Role of Senior Players in Coaching
46:14 Creating a Winning Culture
48:34 Balancing Football and Personal Life
54:54 Evolving Coaching Philosophies
56:06 The Importance of Relationships in Coaching
01:00:39 Sustaining a Long Coaching Career
01:08:21 Closing Thoughts and Resources for Coaches

Daniel Chamberlain: @CoachChamboOK | ChamberlainFootballConsulting@gmail.com | chamberlainfootballconsulting.com
Kenny Simpson: @FBCoachSimpson | fbcoachsimpson@gmail.com | FBCoachSimpson.com

Property Profits Real Estate Podcast
Adapting and Overcoming in Real Estate with Tommy Hardaway

Feb 12, 2026 · 11:10


Marine veteran and investor Tommy Hardaway shares how he's transitioned from single-family homes to investing in RV resorts and self-storage facilities. Learn why these asset classes appeal to him, how they perform in economic downturns, and why short-term RV stays near Dollywood are part of his winning strategy.

Get Interviewed on the Show!
Are you a real estate investor with some 'tales from the trenches' you'd like to share with our audience? Want to get great exposure and be seen as a bona fide real estate pro by your friends? Would you like to inspire other people to take action with real estate investing? Then we'd love to interview you! Find out more and pick the date here: http://daveinterviewsyou.com/

#propertyprofits #rvresortinvesting #realestatewealth #podcastinterview

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey
everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users.
And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget, Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago.
But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes...
It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed: you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have, and also inference time scaling.
It can also be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.

Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts.
These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it as, like, the capability in certain tasks: like the Pro model today has saturated some sorts of tasks. So next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for at some point, the Flash model in two generations will be able to do basically everything. And how do you make it economical to like keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where does the, where do things break down?
How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you keep pushing the team internally, like, this is what we're building towards? Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task.
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark-inspired architectural improvement? Like, uh, I'm just kind of jumping on that because you just...Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, I mean, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. We don't actually have, you know, much larger than 128K these days, or we're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The use cases we're trying to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning.
It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a billion tokens, let alone, you know, a trillion tokens. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens, right, in a meaningful way.
Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. 
And DeepSeek had this DeepSeek-OCR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think what we want the models to be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn-video-into-a-SQL-like-table capability.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right?
Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models.
Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it has to, it's going to be some system like that, that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like BERT was, like, basically immediately inside of Google search and that improved results a lot, right? Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantic ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah.
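The funnel Jeff sketches, lightweight filters over everything, a mid-tier scorer down to a handful of candidates, then the most capable model over those, can be mocked up as a scoring cascade. The scorers below are toy stand-ins (lexical overlap and a dummy richer score), not Google's actual signals:

```python
def cheap_score(query, doc):
    # stage 1: lightweight lexical overlap, cheap enough to run over everything
    return len(set(query.lower().split()) & set(doc.lower().split()))

def mid_score(query, doc):
    # stage 2: a slightly richer (still toy) scorer applied only to survivors
    return cheap_score(query, doc) + 0.001 * len(doc)

def cascade(query, corpus, k1=1000, k2=10):
    # each stage narrows the candidate set before a more expensive stage runs
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: mid_score(query, d), reverse=True)[:k2]
    return stage2  # hand these few docs to the most capable model

docs = ["solar panel deployment report", "cat videos", "solar energy trends"]
top = cascade("solar deployment", docs, k1=2, k2=1)
```

The economics come from the shape: the cheapest function sees everything, the most expensive model sees almost nothing, yet the system behaves as if it attended to the whole corpus.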
I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, the web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing.
Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.
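The synonym-softening Jeff describes (restaurant → restaurants, cafe, bistro) amounts to expanding the query's terms before scoring. A toy sketch; the synonym table and overlap score are made-up stand-ins for the real signals:

```python
# Soften exact matching by expanding the user's terms before scoring.
SYNONYMS = {
    "restaurant": {"restaurants", "cafe", "bistro"},
    "photo": {"photos", "picture", "image"},
}

def expand(query):
    terms = query.lower().split()
    expanded = set(terms)
    for t in terms:
        expanded |= SYNONYMS.get(t, set())  # add synonyms for each term
    return expanded

def score(query, doc):
    # overlap between the expanded query and the document's terms
    return len(expand(query) & set(doc.lower().split()))

# "restaurant" now matches a page that only ever says "bistro"
hit = score("best restaurant", "a cozy bistro downtown")
```

With a disk-based index each extra term cost a seek per shard; with the index in memory, going from 4 query terms to 50 is nearly free, which is what made this softening practical.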
And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X, but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprisingly. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting.
And then you have to like classify whether the page is, you have to decide which pages should be updated and at what frequency. Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, what would I do? I could pre-compute the image thumbnails. I could, like, try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips.
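That thumbnail thought experiment can actually be run with numbers from the list. The constants below are order-of-magnitude values in the spirit of the famous table, and the 30 KB thumbnail / 1 MB original sizes are made-up assumptions for illustration:

```python
MS = 1e-3
DISK_SEEK_S = 10 * MS          # one disk seek, order of magnitude
DISK_READ_1MB_S = 20 * MS      # sequential 1 MB read from disk

def page_latency_s(n_thumbs, precomputed):
    """Disk time to serve one result page of n_thumbs images:
    precomputed ~30 KB thumbnails vs. 1 MB originals resized on the fly."""
    mb_per_image = 0.03 if precomputed else 1.0
    return n_thumbs * (DISK_SEEK_S + mb_per_image * DISK_READ_1MB_S)

fast = page_latency_s(20, precomputed=True)
slow = page_latency_s(20, precomputed=False)
```

Half a minute of arithmetic like this settles the design question (pre-compute the thumbnails) before a line of production code is written.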
Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM, the accelerator attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that, that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah.
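Jeff's arithmetic here is worth making explicit: at roughly 1,000 pJ to move a weight across the chip and about 1 pJ for the multiply itself, the movement only amortizes if each moved weight is reused across a batch. A sketch using those two order-of-magnitude figures from the conversation:

```python
MOVE_PJ = 1000.0   # move one weight from far SRAM/HBM into the multiply unit
MAC_PJ = 1.0       # one low-precision multiply-accumulate

def energy_per_use_pj(batch_size):
    # the weight is moved once, then reused batch_size times
    return MOVE_PJ / batch_size + MAC_PJ

unbatched = energy_per_use_pj(1)     # movement dominates completely
batched = energy_per_use_pj(256)     # movement amortized across the batch
```

At batch size 1 you pay about 1,001 pJ per useful multiply; at 256 it drops under 5 pJ, which is the energy argument for batching (latency pulls the other way).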
Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost, uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but if you do that and it all fits in, in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go?
So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it is it worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out what ML computations will people want to run in a very fast changing field. And so having people with interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip, yeah, design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good.
And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of careful, uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of energy. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah.
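The "low bit precision with scaling factors that apply to a whole bunch of weights" recipe is block-wise quantization: store tiny integers plus one float scale per block. A NumPy sketch; the block size and bit width are illustrative choices, not a description of any TPU format:

```python
import numpy as np

def quantize(w, block=32, bits=4):
    """Low-bit signed integers plus one float scale per block of weights."""
    qmax = 2 ** (bits - 1) - 1                       # 7 for 4-bit signed
    w = np.asarray(w, dtype=np.float32)
    pad = (-len(w)) % block                          # pad to a whole block
    blocks = np.pad(w, (0, pad)).reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                        # avoid divide-by-zero
    q = np.round(blocks / scales).astype(np.int8)    # the low-bit payload
    return q, scales

def dequantize(q, scales, n):
    return (q.astype(np.float32) * scales).reshape(-1)[:n]

rng = np.random.default_rng(0)
w = rng.standard_normal(100).astype(np.float32)
q, scales = quantize(w)
w_hat = dequantize(q, scales, len(w))
```

Each weight now costs 4 bits plus a shared slice of one 32-bit scale, and the per-element error is bounded by half a quantization step of its block's scale.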
Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to, to solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah.
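The "accept five or six out of eight" arithmetic can be modeled simply. Assume the draft model proposes k tokens and the verifier accepts each independently with probability p, stopping at the first rejection and emitting one corrected token itself; this is a simplification for back-of-envelope purposes, not the exact acceptance rule of real speculative decoding:

```python
def expected_tokens_per_pass(k=8, p=0.8):
    """Expected tokens emitted per large-model verification pass when a
    draft proposes k tokens, accepted as a prefix with per-token prob p."""
    accepted = sum(p ** i for i in range(1, k + 1))  # E[accepted prefix length]
    return accepted + 1.0  # the verifier always contributes one more token

speedup = expected_tokens_per_pass(8, 0.8)
```

With k=8 and an 80% per-token acceptance rate this comes out around 4.3 tokens per weight-moving pass, which is the roughly five-fold amortization improvement Jeff cites.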
Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally design it into the hardware. Yeah, yeah.
Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions you end up doing, uh, at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.
Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, than you would ask a single model to do. Um, so that's super interesting.
How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. Uh, effectively, that would, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part. That you can score or what are like, yeah, yeah. How, how would you model that, that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, is this things, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.
Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we.
the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, LLM judge.
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those. And seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now: you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.
Shawn Wang [00:46:13]: Yeah.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.
Shawn Wang [00:46:20]: That would be. As far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.
Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out. Um, uh, just to draw a bit on the IMO goal.
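The model-as-critic reranking Jeff describes a moment earlier — rating 2,000 retrieved items to keep the 50 most relevant — reduces to a second-stage scoring pass. A sketch where `critic_score` stands in for an LLM prompted as a relevance judge (the word-overlap critic and the documents are purely illustrative):

```python
def rerank_with_critic(query, documents, critic_score, top_k=50):
    """Second-stage reranking: a critic scores every retrieved document
    and we keep the top_k. critic_score stands in for an LLM prompted as
    a relevance judge (possibly the same base model, prompted differently)."""
    ranked = sorted(documents, key=lambda d: critic_score(query, d), reverse=True)
    return ranked[:top_k]

def toy_critic(query, doc):
    """Made-up critic: fraction of query words appearing in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

docs = [
    "speculative decoding for fast inference",
    "rabbit care and feeding",
    "low precision training of neural networks",
    "history of the golden gate bridge",
]
top = rerank_with_critic("low precision inference", docs, toy_critic, top_k=2)
```

In practice the critic call is the expensive part, so this is usually run only on the first-stage retriever's shortlist rather than the whole corpus.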
Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things. And then this year we were like, screw that we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about, like the merger of like symbolic systems and like, and, and LLMs, uh, was very much a core belief. And then somewhere along the line, people just said, Nope, we'll just all do it in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural net, like in some way of lots of different neurons. And activation patterns firing when we see certain things and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back now that, that approach for solving the problem doesn't seem like it's going to work. I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have like completely separate, uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.
Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it's maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.
Jeff Dean [00:48:06]: I mean, I do think like that IMO with, you know, translating to Lean and using Lean and then the next year and also a specialized geometry model. And then this year switching to a single unified model. That is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good because it shows you that the capabilities of that general model have improved dramatically and, and now you don't need the specialized model.
This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for lots of different, each different problem, right? I have, I want to recognize street signs and something. So I train a street sign recognition model, or I want to, you know, decode speech recognition. I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do and they're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models, the training models. Yeah. Yeah. And it's kind of interesting that like people with these, this like universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.
Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh. There's this concept of like, uh, maybe capacity of a model, like abstractly a model can only contain the number of bits that it has. And, uh, and so it, you know, God knows like Gemini Pro is like one to 10 trillion parameters. We don't know, but, uh, the Gemma models, for example, right? Like a lot of people want like the open source local models that are like that, that, that, and, and, uh, they have some knowledge, which is not necessary, right? Like they can't know everything like, like you have the.
The luxury of you have the big model and big model should be able to capable of everything. But like when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?
Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of like how long are bridges, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.
Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?
Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email.
Probably we'd rather have a single model that, uh, we can then use and use being able to retrieve from my email as a tool and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple, um, you know, uh, stages of interaction. That makes sense.
Alessio Fanelli [00:52:24]: Do you think the vertical models are like, uh, interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?
Jeff Dean [00:52:37]: No, I mean, I think, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of, uh, sort of viewing them, view them as enriching the data distribution for that particular vertical domain for healthcare, say, um, we're probably not going to train or for say robotics. We're probably not going to train Gemini on all possible robotics data. We, you could train it on because we want it to have a balanced set of capabilities. Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kind of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming, you know, it'll still be good at Python programming. Cause we'll include enough of that, but there's other long tail computer languages or coding capabilities that it may suffer on or multi, uh, multimodal reasoning capabilities may suffer.
Cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.
Shawn Wang [00:54:36]: Installable knowledge. Yeah.
Jeff Dean [00:54:37]: Right.
Shawn Wang [00:54:38]: Just download as a, as a package.
Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.
Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper where they, there was a little bit of that, I think. Yeah.
Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving. Do I need 50 billion tokens? Can I do it with a hundred, if I need a trillion healthcare tokens, it's like, they're probably not out there that you don't have, you know, I think that's really like the.
Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data. That is not public healthcare data, uh, not public health. But public healthcare data.
Um, so I think there are opportunities there to say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on say, public data. Yeah.
Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low resource language in the context and it just learns. Yeah.
Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think 120 people in the world and there's no written text.
Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But I think your whole data set in the context, right.
Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world that, uh, or Ethiopian Amharic or something, um, you know, we probably. Yeah. Are not putting all the data from those languages into the Gemini based training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.
Shawn Wang [00:56:49]: Yeah.
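The data-mix trade-off Jeff keeps returning to (adding 200 more languages displaces long-tail coding or multimodal data) is, mechanically, weighted sampling over training sources. A toy sketch, with source names and weights invented purely for illustration:

```python
import random

def make_mixture_sampler(sources, weights, seed=0):
    """Draw training examples from several corpora with fixed mixture weights.

    Raising one source's weight necessarily shrinks every other source's
    share of the batch -- the displacement trade-off described above.
    All source names and weights here are made up for illustration."""
    rng = random.Random(seed)
    names = list(sources)
    probs = [weights[n] for n in names]     # relative weights; no need to normalize
    def sample():
        name = rng.choices(names, weights=probs, k=1)[0]
        return name, sources[name]()
    return sample

corpora = {                                 # toy corpora: each yields a placeholder
    "english_web":  lambda: "<en example>",
    "code":         lambda: "<code example>",
    "multilingual": lambda: "<xx example>",
}
sample = make_mixture_sampler(corpora, {"english_web": 0.6, "code": 0.25, "multilingual": 0.15})
counts = {name: 0 for name in corpora}
for _ in range(10_000):
    name, _example = sample()
    counts[name] += 1
# counts now roughly mirror the 60/25/15 mixture
```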

What in the Wedding
False AI Reviews and Venue Pricing

What in the Wedding

Play Episode Listen Later Feb 12, 2026 52:21


Summary
In this episode of What in the Wedding, hosts Hannah and Ashley discuss the evolving landscape of wedding planning, focusing on trends in venues, the impact of AI on photography, and the importance of adapting to client priorities. They explore how budget constraints are shaping vendor choices and the significance of networking within the industry. The conversation highlights the shift towards more personalized and experience-driven weddings, emphasizing the need for vendors to stay updated and flexible in their offerings.
Chapters
00:00 Welcome to What in the Wedding Podcast
01:45 Trends in Wedding Venues and AI Impact
03:52 Shifts in Wedding Photography Priorities
07:44 Adapting to Changing Wedding Trends
11:42 The Role of AI in Wedding Planning
17:37 Navigating Budget Constraints in Weddings
23:30 The Evolution of Wedding Vendor Relationships
29:23 The Future of Wedding Planning and Trends
35:32 Networking and Learning in the Wedding Industry
41:27 Closing Thoughts and Listener Engagement
Takeaways
Expect the unexpected in wedding planning.
AI is changing the landscape of wedding photography.
Budget constraints are affecting vendor choices.
Couples are prioritizing experiences over traditional elements.
The wedding industry is seeing a shift towards backyard and tent weddings.
Networking is crucial for staying updated in the wedding industry.
Vendors need to adapt to changing client priorities.
Communication between vendors is key to a successful wedding day.
The importance of customization in wedding packages.
Understanding generational differences in wedding planning.
Keywords
wedding planning, wedding trends, photography, AI in weddings, budget constraints, vendor relationships, wedding venues, wedding industry, networking, wedding photography
Hosted on Acast. See acast.com/privacy for more information.

Hybrid Ministry
Episode 188: How I Planned Youth Group - 3 Weekly Challenges

Hybrid Ministry

Play Episode Listen Later Feb 12, 2026 12:16


Can I design a compelling youth night from scratch each week with a different order - while also creating a brand-new DYM game from scratch? Oh - And stick around to the end of the video, because I'm going to tell you how you can get this game that's not even public yet in the pipeline, FOR FREE! Let's find out! ACCESS TO FREE GAME & RECAP EPISODE https://www.patreon.com/posts/free-game-winter-150284516?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link SHOW NOTES Shownotes & Transcripts https://www.hybridministry.xyz/188 ❄️ WINTER SOCIAL MEDIA PACK https://www.patreon.com/posts/winter-seasonal-144943791?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link HYBRID HERO MEMBERS GET IT FREE! https://www.patreon.com/hybridministry YOUTH MINISTRY LEADER COHORT (It's FREE!) https://www.ymlcohort.com/

The Bookkeepers' Podcast
Why Compliance-Only Bookkeepers Get Stuck

The Bookkeepers' Podcast

Play Episode Listen Later Feb 12, 2026 35:32


Jo and I always think that for bookkeepers, having a little black book is the starting point to advisory. In this episode, we dive deep into the world of bookkeepers' advisory services and how they can elevate your business. We chat with Sam from Nexus, who shares invaluable insights on building partnerships that can help you become a better business owner. Discover how to spot opportunities for your clients, from cost savings to funding options, and learn how to have those crucial conversations that can make a real difference. Chapters: 00:00:00 - Why 'Compliance-Only' Bookkeepers Get STUCK 00:00:51 - About Nexus 00:02:04 - Services for Bookkeepers 00:04:08 - Experience with VAT Funding 00:05:28 - Advisory Services 00:09:07 - Understanding Client's Needs 00:10:30 - The Importance of Listening 00:12:16 - Proactive Approach 00:12:45 - Sam's Journey 00:15:18 - Sam's Career Shift 00:16:57 - Furlonteer Initiative 00:17:46 - Transition to Tech 00:18:35 - Entrepreneurial Spirit 00:20:32 - Studenteer Initiative 00:23:17 - Challenges of Starting a Business 00:24:49 - Sustainable Business Growth 00:27:22 - Building and Using a Network 00:31:06 - Helping Bookkeepers 00:31:32 - Motivation and Business Model 00:32:08 - Adapting to Change and New Opportunities 00:33:04 - Building Network through Connection 00:33:51 - Becoming the Go-To Person 00:34:55 - Reaching Out to Sam ----------------------------------------------- About us We're Jo and Zoe and we help bookkeepers find clients, make more money and build profitable businesses they love. Find out about working with us in The Bookkeepers' Collective, at: 6figurebookkeeper.com/collective ----------------------------------------------- About our Sponsor This episode of The Bookkeepers' Podcast is sponsored by Xero. 
Get 90% off your first 6 months by visiting: https://www.xero.com/uk/campaign/referral-influencer/?utm_medium=influencers&utm_source=partnerstack&utm_campaign=8e10854455f4&ps_partner_key=OGUxMDg1NDQ1NWY0&ps_xid=kNFl7kZNBfWqfg&gsxid=kNFl7kZNBfWqfg&gspk=OGUxMDg1NDQ1NWY0 ----------------------------------------------- Promotion This video contains paid promotion. ----------------------------------------------- Disclaimer The information contained in The Bookkeepers' Podcast is provided for information purposes only. The contents of The Bookkeepers' Podcast is not intended to amount to advice and you should not rely on any of the contents of the Bookkeepers' Podcast. Professional advice should be obtained before taking or refraining from taking any action as a result of the contents of the Bookkeepers' Podcast. The 6 Figure Bookkeeper Ltd disclaims all liability and responsibility arising from any reliance placed on any of the contents of the Bookkeepers' Podcast.

Birds 365: A Philadelphia Eagles Podcast
Mike Gill on Eagles Offensive OVERHAUL, Jalen Hurts Adapting & Jeff Stoutland Departure | B365

Birds 365: A Philadelphia Eagles Podcast

Play Episode Listen Later Feb 11, 2026 46:27


Mike Gill joins Birds 365 to break down the Eagles' complete offensive overhaul, how Jalen Hurts must adapt to the new scheme, and the massive impact of Jeff Stoutland's departure.Support this podcast at — https://redcircle.com/birds-365/donationsPrivacy & Opt-Out: https://redcircle.com/privacy

IN-the-Know
Skills for the Future of Insurance with Diane Hanlon

IN-the-Know

Play Episode Listen Later Feb 11, 2026 25:27


Diane Hanlon serves as Head of Sales and Market Development at The Institutes. With more than 23 years of experience in Fortune 500 B2B sales, account management, and contract management, Diane brings deep expertise in business development and client relationship management. Before joining The Institutes, she held senior sales leadership roles at On Call International, a Tokio Marine HCC company, and Enterprise Holdings. In this episode of In the Know, Chris Hampshire and Diane discuss sales leadership and insurance industry careers, the latest initiatives at The Institutes, and the value of the CPCU designation journey in her career.   Key Takeaways ● Diane did not take the traditional risk management path. ● The most appealing aspects of the insurance industry. ● Benefits of consulting within the sales sector. ● Characteristics of successful salespeople. ● Protocols for retaining B-to-B sales arena opportunities. ● Questions to ask yourself before moving into the sales sector. ● Addressing industry talent gaps. ● Adapting to future technologies in the insurance industry. ● The evolution of training and development. ● The future of international insurance education and career path development. ● Diane's experience with international insurance interconnectivity. ● Advice to anyone who is considering a career in insurance. ● The 'addictive' journey of earning a CPCU designation. ● A five-year look at the future of the insurance industry. ● Diane's fulfilling advice to her early career self.   In the Know podcast theme music written and performed by James Jones, CPCU, and Kole Shuda of the band If-Then.   To learn more about the CPCU Society, its membership, and educational offerings, tools, and programs, please visit CPCUSociety.org.   
Follow the CPCU Society on social media: X (Twitter): @CPCUSociety Facebook: @CPCUSociety LinkedIn: @The Institutes CPCU Society Instagram: @the_cpcu_society   Quotes ● "There are so many transferable skills that can be used in the insurance industry." ● "Building relationships is the key to the successes you're going to have." ● "The skills that someone needs today are going to look different in the coming years, and people need to be adaptable." ● "We work with all verticals to ensure they have what they need to be better at what they do."  

The Lead
CNN International's Jenni Watts on leading and adapting in a changing media world

The Lead

Play Episode Listen Later Feb 11, 2026 17:38


Jenni Watts, executive producer at CNN International, took an unconventional path into journalism, beginning as a philosophy major at Auburn before moving into radio and eventually television. Her work spans environmental, cultural and technology features, with a career defined by adaptability and curiosity rather than following a linear path.   In this episode, Watts reflects on how her early career shifts shaped her approach to storytelling and why technical knowledge and strong interpersonal skills are essential in modern journalism. She also shares how covering the events of 9/11 solidified her commitment to the field, and offers advice to early-career journalists on embracing opportunity, staying curious and maintaining integrity as AI reshapes the industry. Find The Lead podcast on Spotify and Apple Podcasts at the link in bio. bit.ly/m/coxinstitute  Guest: Jenni Watts, Executive Producer at CNN International Host: Sidney Josephs

Morning Majlis
How Sharjah's Education System is Adapting and Improving for the Future (11/02/26)

Morning Majlis

Play Episode Listen Later Feb 11, 2026 19:28


Ever wondered how we can modernize and improve our educational system? Or how pivotal and effective artificial intelligence is going to be for our children's education? Well then hear from Wajdi Manai, Chair of Sharjah International Summit on Improvement in Education Scientific Committee, who elaborates on the ever-evolving programmes and teaching techniques in order to fully keep up-to-date with the ever-changing world and labor market. Wajdi reassures us that artificial intelligence can be a benefit for our students, rather than an inhibitor or restrictor that seeks to replace. This improvement is evident with the onset of the 5th edition of the International Summit on Improvement in Education, which takes place in Sharjah and invites you down to have your voices heard on how you would improve the educational journey. Listen to #Pulse95Radio in the UAE by tuning in on your radio (95.00 FM) or online on our website: www.pulse95radio.com ************************ Follow us on Social. www.facebook.com/pulse95radio www.twitter.com/pulse95radio

Better Call Daddy
470. Learning to Be a Better Father After Loss: Bestselling Author G. Michael Hopf

Better Call Daddy

Play Episode Listen Later Feb 10, 2026 68:54


"Life is for the living." — G. Michael Hopf
In this heartfelt episode of Better Call Daddy, host Reena Friedman Watts and her dad, Wayne Friedman, reconnect with the talented G. Michael Hopf, a bestselling author and master communicator. G. Michael shares his journey through the ups and downs of the creative process, revealing the importance of resilience and adaptation in both life and storytelling.
Facing Challenges
G. Michael opens up about the hurdles he faced while trying to adapt his novella into a film, including the impact of the Screen Actors Guild strike. He candidly discusses the emotional rollercoaster of raising funds and the lessons learned from setbacks, emphasizing the need to keep moving forward despite adversity.
The Power of Storytelling
As a two-time guest on the show, G. Michael dives into his passion for storytelling and how he leverages AI to enhance communication without sacrificing creativity. He shares insights on the evolving landscape of content creation and the importance of embracing new technologies to stay relevant in a fast-paced world.
Life Lessons and Legacy
Throughout the conversation, G. Michael Hopf reflects on the profound impact of personal loss and how it has shaped his perspective as a father. He offers wisdom on mourning, resilience, and the significance of being present for loved ones, encouraging listeners to focus on the beauty and opportunities life has to offer.
Key Themes
- Navigating challenges in the creative industry
- The transformative power of storytelling
- Embracing technology and AI in content creation
- The importance of resilience and adaptation
- Finding fulfillment and purpose in life after loss
Episode Highlights
(00:00) Welcome to the Better Call Daddy Show
(01:20) Catching Up with G. Michael Hopf
(10:30) The Journey of Adapting a Novella into Film
(20:00) Leveraging AI for Enhanced Communication
(30:15) Life Lessons from Personal Loss
(40:45) Wisdom from Wayne: Life is for the Living
Episode Keywords
Better Call Daddy, Podcast, Storytelling, Resilience, AI in Writing, Creative Process, Personal Growth, Life Lessons, Fulfillment, Fatherhood, Overcoming Adversity, Emotional Healing, Technology in Content Creation
G. Michael Hopf is a multifaceted individual whose life experiences have significantly contributed to his career as a USA Today bestselling author. He describes himself modestly as "just a guy stringing words together," but his background tells a story of adventure and dedication. Hopf is not only a writer but also a devoted father, husband, and a veteran, underscoring his diverse life experiences and perspectives.
Before venturing into the world of writing, Hopf served in the U.S. Marine Corps, which provided him with a rich tapestry of experiences that would later influence his writing. His service as a combat veteran has imbued his works with a sense of realism and depth, particularly in themes related to survival, resilience, and the complexities of human nature in challenging circumstances.
Following his military service, Hopf worked as a bodyguard and commercial diver, further diversifying his life experiences. This role likely exposed him to a variety of situations and individuals, broadening his understanding of human interactions and the many facets of society.
Now residing in San Diego with his family, Hopf has fully embraced his passion for writing and publishing. He is best known for his New World series, which delves into post-apocalyptic scenarios, exploring how humanity might respond to cataclysmic events. His works often intertwine elements of action, adventure, and political intrigue, capturing the imaginations of readers who are drawn to speculative and survivalist narratives.
Hopf's commitment to his craft and his ability to draw from his life experiences have made him a prominent figure in the genres of post-apocalyptic fiction and westerns. His works not only entertain but also provoke thought about the resilience of the human spirit in the face of adversity. Connect with G. Michael Hoff Website Connect with Reena Friedman Watts Website | LinkedIn | Instagram | YouTube Thank you for tuning in to Better Call Daddy where stories of growth, resilience, and understanding come together!   If you enjoyed this episode, check out the previous one with Scott Ferguson for more insights on creativity and storytelling. Leveling Up Your Life

Venture Everywhere
Riding the Wave: Helaine Knapp with Pau Sabria

Venture Everywhere

Play Episode Listen Later Feb 10, 2026 24:55


In episode 106 of Venture Everywhere, Pau Sabria, co-founder of Remotely.works—a platform helping companies hire software engineers in Latin America—talks with Helaine Knapp, founder and former CEO of CityRow, a rowing fitness franchise that was acquired. Helaine reflects on how her career in tech startups at Buddy Media and Olapic gave her the foundation to build a brick-and-mortar fitness business. She discusses writing Making Waves, using brutal honesty to tell the entrepreneurial story most founders won't share, and navigating the in-between as host of Step Into Next, a podcast about walking together from what was to what's next.

In this episode, you will hear:
- Adapting startup playbooks to the physical fitness business model
- Insights into the rise and fall of Connected Fitness, and lessons learned
- The power of honest storytelling during a company's challenging exit
- The real challenges of entrepreneurship: navigating legal battles and team dynamics
- Reimagining success and personal growth while transitioning between ventures

Learn more about Helaine Knapp | CITYROW
LinkedIn: https://www.linkedin.com/in/helaine-knapp
Website: www.helaineknapp.com

Learn more about Pau Sabria | Remotely.works
LinkedIn: https://www.linkedin.com/in/pausabria
Website: https://www.remotely.works/

Hey Non-Profits, Raise More Money!
Build and Grow Your Donor Pipeline With Intentional Fundraising Events

Hey Non-Profits, Raise More Money!

Play Episode Listen Later Feb 10, 2026 30:41


Nikki DeFalco, Vice President of Fundraising at the MS Society, shares how fundraising events are still the best way to build donor pipelines.

There has been a surge recently in the following question: are events still worth it? The answer is, absolutely! Nikki and Trevor met on the podcast to discuss how community engagement, localization, and creating enjoyable experiences for attendees can lead to stronger donor pipelines for any nonprofit.

Their conversation dives into:
- Donor retention
- The significance of feedback in event planning
- Building and leading a fundraising team
- The need for personal connection
- Adapting communication for different generational preferences

Have a question or topic you'd like us to cover? Let us know at https://hgafundraising.com/ask-your-questions/

Women In Product
Recharged and Ready: The Value of Good Transitions & Disconnecting From Titles

Women In Product

Play Episode Listen Later Feb 10, 2026 47:22


Deb Liu joins Carmen Palmer for the first episode in our monthly series, In The Lead. Deb shares insights from her year away from titles and company affiliations - what she did, what she learned, and the clarity she is carrying forward in 2026. As always, Deb has great insights and advice.

In The Lead is a new monthly series on Product Rising. It will share thought-provoking conversations hosted by Carmen Palmer, CEO of Women In Product, with a wide range of industry leaders. It uses a revolving set of questions to get an engaging combination of hot takes and deep insight into the current state of product, working and leading in today's technology industry, and building effective organizations.

00:45 Meet Deb Liu: Tech Executive and Author
02:28 Defining Success in the New Year
03:03 The Blank Name Tag Club
04:58 Navigating Career Transitions
09:27 Overcoming Failure: The Facebook Marketplace Story
14:08 Building Perseverance Through Adversity
18:43 The Evolving Role of Product Managers
22:07 Understanding Customer Needs in AI Startups
23:26 Understanding Customer Needs
24:01 The Role of Product Managers
25:14 Strategic Positioning in Companies
26:09 Adapting to Technological Changes
28:35 Leadership and Empathy
29:33 Opportunities for Product Leaders in AI
33:07 Personal Experiences and Insights
36:07 Challenges in Corporate Life
43:15 Future of Leadership and Technology
45:59 Closing Reflections

Down To Business
Adapting to industry change

Down To Business

Play Episode Listen Later Feb 10, 2026 24:02


This week's guest is Josephine Moran, president & CEO of Ledyard Bank. The conversation dives into how the community bank is leveraging technology to better serve its clients and how the team is expanding their footprint in the region.

Boardroom Governance with Evan Epstein
Betsy Atkins: Why Directors Must Become More Entrepreneurial and Change-Adaptive

Boardroom Governance with Evan Epstein

Play Episode Listen Later Feb 9, 2026 62:41


(0:00) Intro
(2:04) About the podcast sponsor: The American College of Governance Counsel
(2:50) Start of interview
(3:51) Betsy's origin story
(9:14) The HealthSouth Board Scandal
(16:35) Her preference when picking what boards to serve on
(17:30) Insights on VC-backed Boards and the role and profile of the independent director in this context
(21:20) Insights on PE-backed Boards and the role and profile of the independent director in this context
(25:35) Navigating International Board Dynamics. Her experience on the boards of Volvo and Schneider Electric.
(30:57) The Rise of Private Markets. Example of Atlas Air (Apollo backed). IPOs in 2026.
(35:07) AI's Impact on the Market and other macro trends
(38:10) Founder-Led Companies and Governance (including dual-class share structures)
(42:25) The Impact of Geopolitics on Governance
(45:11) The Impact of Politicization on Governance. Examples of Budweiser, Google, Netflix, and the mission-driven approach by Coinbase.
(50:09) Adapting to Accelerating Change as Directors. The problem with incrementalist "custodian" directors in times of disruption. "It's really about being change-adaptive and comfortable making decisions with incomplete information. You look at someone like Musk, he's making decisions when he has 60% of the information. Most boards want 95% before they'll move. That's the fundamental challenge."
(55:58) Books that have greatly influenced her life ("the best business book"): Good to Great, by Jim Collins (2001)
(56:16) Her mentors: Craig Billings (CEO, Wynn Resorts), Michael Steen (CEO, Atlas Air Cargo), Jean-Pascal Tricoire (Chairman, Schneider), and her mom ("her biggest mentor")
(57:06) On the current state of shareholder activism
(57:58) Quotes that she thinks of often or lives her life by: "Perfect is the enemy of good enough."
(58:19) An unusual habit or an absurd thing that she loves: she's a compulsive note-taker (plus, her recommended policy for directors)
(1:00:12) The living person she most admires: Elon Musk

Betsy Atkins has served on more than 38 public company boards and through 17 IPOs, in addition to scores of PE and VC-backed company boards. She brings a rare perspective shaped by crisis situations, international board service, and rapid technological change. She currently serves on the boards of Wynn Las Vegas, GoPuff, and the Google Cloud Advisory Board.

You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License

Race Industry Now!
Inside E1 Series: Rodi Basso on Electric Racing, RaceBird Tech & Sustainable Water Mobility

Race Industry Now!

Play Episode Listen Later Feb 9, 2026 32:30


During Race Industry Week by EPARTRADE, Rodi Basso, Founder & CEO of the E1 Series, shares the inside story behind the world's first all-electric powerboat racing championship and explains how E1 is shaping the future of sustainable water mobility.

Born during the COVID pandemic and developed in collaboration with Alejandro Agag (Formula E, Extreme E), E1 was conceived not just as a racing series, but as a global technology laboratory for electric propulsion on water. Backed by Basso's extensive experience at Formula One, McLaren, and Magneti Marelli, the championship secured a landmark 25-year exclusivity agreement with the UIM, officially launching the World Electric Powerboat Championship in Monaco.

At the heart of E1 is the revolutionary RaceBird — a purpose-built electric raceboat that uses hydrofoils to lift the hull above the water, dramatically reducing drag. Despite water being nearly 800 times denser than air, the RaceBird achieves exceptional efficiency, with the battery accounting for less than 20% of total weight and top speeds approaching 52 knots.

Beauty and Braids
#146 Beyond the Chair: The Realities of Glam, Grit, and Growth with Gabrielle Corney

Beauty and Braids

Play Episode Listen Later Feb 8, 2026 32:39


In this engaging conversation, host Karyne Tinord speaks with renowned hairstylist Gabrielle Corney about her extensive journey in the beauty industry. They explore the challenges and triumphs of being a hairstylist, including the importance of adapting creativity in various environments, navigating the business side of beauty, and the realities of burnout. Gabrielle shares personal insights on maintaining authenticity while staying relevant in a constantly evolving industry, and emphasizes the significance of kindness and self-care. The discussion also touches on financial literacy and the need for boundaries in both personal and professional life.

Takeaways:
- Gabrielle has been in the beauty industry for almost 33 years.
- Adapting creativity involves research and understanding client energy.
- Many stylists struggle with the business side of beauty.
- Financial literacy is crucial for long-term success in beauty.
- Burnout is a common challenge that requires balance and self-care.
- Staying relevant means being kind and authentic in your work.
- Healthy hair is a priority for many clients.
- Setting boundaries is essential for mental health.
- Two givers in a relationship create a magical dynamic.
- Personal growth often comes from learning to say no.

Chapters:
00:00 Introduction to the Beauty Journey
02:12 Adapting Creativity in Diverse Environments
05:17 Navigating the Business Side of Beauty
10:22 Burnout: A Common Challenge in the Industry
18:16 Staying Relevant and Authentic in Beauty
24:40 Personal Insights and Recommendations

To connect with Gabrielle Corney, follow her on Instagram.

Hyper Conscious Podcast
Your Normal Is Not THE Normal (2336)

Hyper Conscious Podcast

Play Episode Listen Later Feb 7, 2026 23:53 Transcription Available


Stop accepting less than you're capable of. In today's episode, Kevin and Alan break down how lowered standards quietly become your normal and why comfort slowly erodes performance. They challenge distorted self-perception, weak discipline, and passive habits that limit long-term growth.

This conversation centers on self-awareness, personal responsibility, and identity-driven execution. It reinforces the importance of clear standards, consistent action, and ownership of results. Do not just listen. Raise your baseline and operate at the level you were built for.

Learn more about:
Alan's coaching, “Business Breakthrough Session.” Your first 30-minute call is FREE. This call is designed to help you identify bottlenecks and build a clear plan for your next level. - https://calendly.com/alanlazaros/30-minute-breakthrough-session

Join our private Facebook community, “Next Level Nation,” to grow alongside people who are committed to improvement. - https://www.facebook.com/groups/459320958216700

Track the Work. Earn the Results. To know more about the "Next Level Fitness Accountability Group," reach out via Instagram.
Kevin: https://www.instagram.com/neverquitkid/
Alan: https://www.instagram.com/alazaros88/

NLU is not just a podcast; it's a gateway to a wealth of resources designed to help you achieve your goals and dreams. From our Next Level Dreamliner to our Group Coaching, we offer a variety of tools and communities to support your personal development journey. For more information, check out our website and socials using the links below.

Perpetual Traffic
How to Create a 'Moat' Around Your Company/Department Like Meta

Perpetual Traffic

Play Episode Listen Later Feb 6, 2026 57:24


Are you hesitant to invest in your team, fearing they might leave after all that time and money? What if the real risk is not investing, and they stay uninspired? In this episode, Lauren and I chat about how investing in your team can create a powerful internal moat that attracts the right people and drives your business forward.

We discuss a recent initiative in Lauren's company where a team member took the initiative to improve internal communication and create an SOP for Slack use. It's a perfect example of why fostering initiative and empowering employees to take ownership can elevate your entire team's performance.

If you're struggling to create that type of culture, this conversation will show you how to reevaluate your core values, ensure your team's alignment, and ultimately build a work environment where the best talent thrives. We also explore how these ideas translate into digital marketing, leadership, and managing remote teams effectively.

In This Episode:
- Core values, employee initiative, & continuous learning
- The risk of not investing in your team and gatekeeping
- Meta's strategic investments in employee acquisition and AI
- Creating an internal moat for your business
- The people analyzer process based on core values
- Adapting to external challenges in digital marketing
- Why radical candor and emotional intelligence are critical
- Final thoughts on creating a moat and call to action

Mentioned in the Episode:
Gino Wickman's book, Traction: https://a.co/d/01q1TP4O
Patrick Lencioni's book, The Ideal Team Player: https://a.co/d/0cROW6f
Creating custom emojis on Slack: https://slack.com/help/articles/206870177-Add-custom-emoji-and-aliases-to-your-workspace

Listen to This Episode on Your Favorite Podcast Channel:
Follow and listen on Apple: https://podcasts.apple.com/us/podcast/perpetual-traffic/id1022441491
Follow and listen on Spotify: https://open.spotify.com/show/59lhtIWHw1XXsRmT5HBAuK
Subscribe and watch on YouTube: https://www.youtube.com/@perpetual_traffic?sub_confirmation=1

We Appreciate Your Support!
Visit our website: https://perpetualtraffic.com/
Follow us on X: https://x.com/perpetualtraf
Connect with Ralph Burns:
LinkedIn - https://www.linkedin.com/in/ralphburns
Instagram - https://www.instagram.com/ralphhburns/
Hire Tier11 - https://www.tiereleven.com/apply-now...

Macro Hive Conversations With Bilal Hafeez
Ep. 344: Sean McGould on Multi-Strategy Investing, Trump 2.0, and the AI 'Digital Tool' Era

Macro Hive Conversations With Bilal Hafeez

Play Episode Listen Later Feb 6, 2026 32:39


Sean McGould is the founder and CEO of the Lighthouse Group, an approximately $17 billion investment management firm. Prior to Lighthouse, Sean was the Director of the Outside Trader Investment Program for Trout Trading Management Company. Before joining Trout, he worked for Price Waterhouse in auditing and corporate finance.

In this podcast we discuss:
- The Multi-Strategy Investment Approach
- The Selective "War for Talent"
- Adapting to Trump 2.0 Volatility
- Targeting Real Returns vs Gold
- AI: A Digital Tool, Not a Total Bubble
- Redefining Value in the Digital Age
- Japan's Shareholder Value Pivot
- Centralised Planning Risks in China
- Patience in Tight Credit Markets
- The 2026 Macro Outlook

You can get more information on Sean's firm here.

The commentary contained in the above article/podcast does not constitute an offer or a solicitation, or a recommendation to implement or liquidate an investment or to carry out any other transaction. It should not be used as a basis for any investment decision or other decision. Any investment decision should be based on appropriate professional advice specific to your needs.

Sustainable(ish)
[189] – Climate Fresk (and friends) with Ash Goddard

Sustainable(ish)

Play Episode Listen Later Feb 6, 2026 51:07


Could you really explain climate change to your kids, or to your nana if you needed to?

By and large, I think the vast majority of people are aware of climate change, understand that it means the planet is warming and that it's bad, but if pushed for detail beyond that, we might struggle. Climate Fresk is an interactive card-based workshop that invites participants to uncover the science of climate change and its impacts by working together. It then holds a space to unpack some of the emotions that might come with this new understanding, before moving on to look at solutions and things we can do. It's a concept that started in France and has now had over 2.3 million participants worldwide, and what I really like about it is that we can all get involved. You can join a workshop online, or there might be one happening in person locally to you, and then you can quickly move on to becoming a facilitator and delivering the workshop in your own community or workplace.

In today's episode I'm chatting to Ash Goddard from Climate Clarity, one of the most experienced Fresk facilitators in the UK, who also facilitates a whole host of related workshops such as the Biodiversity Collage and the Adapting to Climate Change workshop. I've been lucky enough to attend a couple of Ash's workshops, and he is a brilliant facilitator who makes it look easy. And while we might not all be able to be as good as Ash, at least not straightaway, I love the fact that anyone can start to deliver the Fresk in their own communities, schools or workplaces pretty quickly, and I hope that maybe a few people who listen in to this episode might be tempted to join a Climate Fresk, and then even take the next steps to spread the knowledge to people they know.

Ash Goddard LISTEN... 
USEFUL LINKS:Climate Clarity- Website- Workshops- Events- Facebook- Instagram Climate Fresk- Website- Find a workshop- Become a facilitator- Facebook- InstagramIPCC reportsDigital CollageBiodiversity CollageAdapting to climate change CollagePlanetary Boundaries FrescoFind a FreskWorkshops for the planet Are you going to do a bit of investigating into your money, and your current accounts and savings after listening? Or have you already moved your money? Do let me know! […]

Farm and Ranch Report
Adapting Agtech to Local Environments

Farm and Ranch Report

Play Episode Listen Later Feb 6, 2026


In today's farm economy, no farmer is going to invest in technology unless it is proven and well-suited for their specific farm conditions.

The Coaching 101 Podcast
Mastering Offensive Line Development and Recruiting Insights with Coach Joel Nellis

The Coaching 101 Podcast

Play Episode Listen Later Feb 5, 2026 70:00


In this episode of The Coaching 101 Podcast, hosts Daniel Chamberlain and Kenny Simpson discuss offensive line development and recruiting with Joel Nellis, Head Football Coach at Brookfield Central High School and owner of Trench Training. Coach Nellis covers the fundamentals of offensive line play, the importance of get-off, footwork, and core strength. He provides insights on the height and weight benchmarks for different levels of play. The discussion also explores the value of non-padded showcases and how to recognize a well-coached offensive line. Additionally, Coach Nellis talks about his trench training program and the importance of aligning with high school coaches to best serve the athletes.

00:00 Introduction to the Coaching 101 Podcast
00:49 Meet Coach Joel Nellis
01:22 Joel Nellis' Coaching Journey
02:55 Family and Community Involvement
04:30 The Importance of Offensive Linemen
06:03 Quote of the Week and Sponsor Messages
11:10 Recruitment Insights for Offensive Linemen
31:33 Coaching High School Basketball: Adjusting to Player Sizes
32:07 Strategies for Overcoming Depth Issues
33:23 Teaching Versatility in Player Positions
34:07 Protecting Weaker Players in Offensive Line
35:42 Importance of Consistent Coaching and Culture
42:29 Adapting to Different Player Levels and Development
44:15 The Value of Repetition in Training
47:36 Evaluating the Effectiveness of Camps and Showcases
53:07 Identifying Well-Coached Offensive Lines
56:48 Promoting Trench Training Camps and Resources
01:01:07 Closing Remarks and Contact Information

Daniel Chamberlain: @CoachChamboOK ChamberlainFootballConsulting@gmail.com chamberlainfootballconsulting.com
Kenny Simpson: @FBCoachSimpson fbcoachsimpson@gmail.com FBCoachSimpson.com

Mind of a Football Coach
Adapting Offense to Fit Your Team with Rob Zimmerman

Mind of a Football Coach

Play Episode Listen Later Feb 5, 2026 31:20


In this episode, Coach Rob Zimmerman shares insights from his 27 years as the head coach at DeWitt High School in Michigan. He discusses the importance of adapting coaching strategies to fit the unique personalities and skills of players, the significance of a strong youth program, and the value of continuity within the coaching staff. Coach Zimmerman emphasizes the need for trial and error in developing effective offensive strategies, the importance of self-scouting, and how to prepare for high-stakes games. He also offers advice for young coaches on the importance of mentorship and continuous learning in the field of coaching. Chapters 00:00 Introduction to Coach Rob Zimmerman 03:00 Longevity and Success in Coaching 05:48 Adapting Offense to Fit Personnel 09:01 Trial and Error in Coaching Philosophy 12:01 Offseason Preparation and Evaluation 15:07 Self-Scouting and Game Preparation 17:57 Adjusting Practice Based on Team Dynamics 21:00 Experiences at Ford Field 23:55 The Importance of Physical Football 26:01 Advice for Young Coaches Learn more about your ad choices. Visit megaphone.fm/adchoices

Systemize Your Success Podcast
Why Most Remote Teams Struggle—and the Leadership Shift That Fixes It with Nicolas Bivero, Co‑founder of Penbrothers | Ep 263

Systemize Your Success Podcast

Play Episode Listen Later Feb 5, 2026 52:02


Coffee with Butterscotch: A Game Dev Comedy Podcast
#[Ep558] How Many Dudes: Designing Games For Success

Coffee with Butterscotch: A Game Dev Comedy Podcast

Play Episode Listen Later Feb 4, 2026 52:58


In episode 558 of 'Coffee with Butterscotch,' the brothers dig into how community engagement is shaping the ongoing development of How Many Dudes, from reacting to feedback to refining what players latch onto. They talk about the role luck plays in getting eyes on a game, and how fast iteration helps them adapt when discovery, localization, or influencer interest suddenly spikes. The conversation centers on staying nimble, listening closely, and adjusting strategy as the game and the audience keep evolving.

Support How Many Dudes!
Official Website: https://www.bscotch.net/games/how-many-dudes
Trailer Teaser: https://www.youtube.com/watch?v=IgQM1SceEpI
Steam Wishlist: https://store.steampowered.com/app/3934270/How_Many_Dudes

00:00 Cold Open
00:26 Introduction and Welcome
02:05 Demo Update for 'How Many Dudes'
05:29 Analyzing Player Engagement and Discovery Queue
06:51 Localization Strategies for the Chinese Market
09:43 Cost Implications of Localization
13:50 Reflections on Game Development Philosophy
17:45 Execution vs. Concept in Game Design
22:05 Iterative Development and Player Feedback
29:47 Strategic Shifts in Game Development Approach
30:54 Navigating Game Development Constraints
33:27 Marketing Strategies and Market Gaps
36:12 The Unpredictability of Entertainment Products
40:17 Designing for Marketability and Engagement
44:58 Adapting to Market Changes and Direct Sales
49:15 The Role of Luck in Game Success
54:21 Building a Resilient Game Development Studio

To stay up to date with all of our buttery goodness subscribe to the podcast on Apple podcasts (apple.co/1LxNEnk) or wherever you get your audio goodness. If you want to get more involved in the Butterscotch community, hop into our DISCORD server at discord.gg/bscotch and say hello! Submit questions at https://www.bscotch.net/podcast, disclose all of your secrets to podcast@bscotch.net, and send letters, gifts, and tasty treats to https://bit.ly/bscotchmailbox. 
Finally, if you'd like to support the show and buy some coffee FOR Butterscotch, head over to https://moneygrab.bscotch.net. ★ Support this podcast ★

Glass & Out
Wheeling Nailers Head Coach Ryan Papaioannou: Adapting in the ECHL, building deception and winning breeding development

Glass & Out

Play Episode Listen Later Feb 4, 2026 59:07


In episode 327 of the Glass and Out Podcast we welcome back Head Coach of the Wheeling Nailers, Ryan Papaioannou. One of the cool aspects of The Coaches Site and Glass and Out is that we get to touch base with coaches during the various chapters of their journey as they climb their way up the coaching ladder.

We first connected with Papaioannou during his time with the AJHL's Brooks Bandits. Under his leadership the Bandits captured seven AJHL Championships, one BCHL Championship (the Bandits joined the BCHL in advance of last season), and four National Championships at the Jr A level in Canada. At the beginning of this season Papaioannou was hired by the Pittsburgh Penguins organization to coach their ECHL affiliate in Wheeling. Currently, the Nailers are tied for 5th overall in the ECHL standings.

More than anything, Ryan's successful transition from tier 2 hockey to the professional ranks is a signal that leadership and the ability to develop a winning culture matter more than what level you've coached at.

Listen as he shares why being adaptable is crucial in the ECHL, the importance of deception in elite players, and why winning breeds development.

Watch on YouTube: https://youtu.be/6iegOCLdYIc
Download the TCS app: https://www.thecoachessite.com/app
Learn more about our presenting sponsors:
Biosteel: BioSteelTeams.com/Glassandout
Hudl: hudl.com/tcs

Masters of Moments
How Omni Aligns Real Estate, Operations, and Guest Experience - Kurt Alexander - President of Omni Hotels & Resorts

Masters of Moments

Play Episode Listen Later Feb 4, 2026 71:49


In this episode of Masters of Moments, Jake Wurzak sits down with Kurt Alexander to unpack how Omni Hotels has built a differentiated hospitality platform by staying deeply rooted in ownership, operations, and long-term thinking. Kurt shares his unconventional path from accounting and investment banking into hotel operations, including the formative experience of working every frontline role at Omni early in his career. The conversation explores why hospitality is fundamentally about people, how ownership mindset shapes better decision making, and what it takes to build hotels that feel both authentic to their destination and durable over decades. They discuss: Kurt's transition from finance into hospitality and the lessons learned from working in frontline hotel roles Why Omni's owner-operator model drives better operational, design, and capital allocation decisions How in-house design, construction, and food and beverage teams create differentiated guest experiences The role of programming, amenities, and experiences in winning group, leisure, and business travel What Omni has learned from joint venture partnerships, challenging deals, and long-term capital stewardship Links: Kurt on LinkedIn - https://www.linkedin.com/in/wkurtalexander/ Omni Hotels & Resorts - https://www.omnihotels.com/ Connect & Invest with Jake: Follow Jake on X: ⁠https://x.com/JWurzak⁠ 1 on 1 coaching with Jake: ⁠https://www.jakewurzak.com/coaching⁠ Learn How to Invest with DoveHill: ⁠https://bit.ly/3yg8Pwo⁠ Topics: (00:00:00) - Intro (00:02:48) - From finance to frontline (00:05:37) - The calling of hospitality (00:09:49) - Omni's unique ownership model (00:17:07) - Design and construction innovations (00:27:18) - Programming for group and leisure travelers (00:34:08) - Competing in the hospitality industry (00:37:25) - Omni's brand identity and signature experiences (00:39:12) - Independent positioning of Omni Hotels (00:39:48) - Leveraging loyalty and unique experiences (00:41:04) - 
In-house culinary expertise and challenges (00:43:45) - Balancing culinary innovation and simplicity (00:45:46) - Adapting to market demands in f&b (00:52:06) - Creating a culture of ownership and excellence (00:55:52) - Incentivizing leadership and sales teams (00:58:33) - Omni's business model and financial strategy (01:02:15) - Lessons from jv partnerships (01:05:01) - Navigating challenges and learning from mistakes (01:07:31) - The importance of long-term thinking in hotel investments (01:09:37) - Favorite hotels and closing remarks

PPCChat Twitter Roundup
EP340 - Broken Pixels, Calm Leaders, and the PPC Comeback ft Amanda Farley

PPCChat Twitter Roundup

Play Episode Listen Later Feb 4, 2026 38:09


In this episode of PPC Live, Amanda Friedt (Farley), CMO of Aimclear, shares her journey in marketing, discussing the importance of integrated marketing, lessons learned from mistakes, and the evolving landscape of PPC. She emphasizes the significance of collaboration, data hygiene, and adapting to AI advancements while providing insights on leadership and handling mistakes in a team environment. Amanda encourages marketers to embrace testing and innovation as they navigate the challenges of 2026 and beyond.

Takeaways
- Amanda emphasizes the importance of integrated marketing.
- She shares her journey of overcoming imposter syndrome.
- Mistakes are opportunities for learning and growth.
- Collaboration is key in navigating PPC challenges.
- Data hygiene is crucial for effective marketing campaigns.
- AI is changing consumer behavior and marketing strategies.
- Leaders should create a safe space for discussing mistakes.
- Testing and innovation are essential for success in marketing.
- Understanding the consumer journey is vital for PPC success.
- Community support can significantly impact marketing efforts.

Chapters
00:00 Introduction and Background
04:12 Lessons Learned from Mistakes
07:10 Navigating Challenges in PPC Marketing
10:09 The Importance of Data and Collaboration
13:24 Adapting to AI and Changing Consumer Behavior
16:04 Leadership and Handling Mistakes
19:11 Advice for 2026 and Future Trends
22:10 Final Thoughts and Fun Question
38:00 Outro

Find Amanda on LinkedIn.

PPC Live The Podcast features weekly conversations with paid search experts sharing their experiences, challenges, and triumphs in the ever-changing digital marketing landscape.

Upcoming: PPC Live event, February 5th, 2026 at StrategiQ's London offices (where Dragon's Den was filmed!) featuring Google Ads script master Nils Rooijmans.

Join our Whatsapp group
Subscribe to our Newsletter

Talk Dizzy To Me
Functional Neurological Disorder (FND) Explained: What It Is and How It Overlaps With Dizziness

Talk Dizzy To Me

Play Episode Listen Later Feb 4, 2026 57:43


Functional Neurological Disorder (FND) is often misunderstood... but it's real, common, AND treatable. In this episode of Talk Dizzy To Me, vestibular physical therapists Dr. Abbie Ross, PT, NCS and Dr. Carly Lochala, PT, NCS sit down with Dr. Julie Hershberg, PT, NCS to explain what FND is, why it's been minimized in healthcare, and how it overlaps with dizziness, migraine, dysautonomia/POTS, hypermobility/EDS, and vestibular disorders.

They break down brain networks like the default mode network and salience network, discuss common clinical clues (variability, attention-related shifts), and explain how treatment often starts with nervous system regulation, trust-building, and whole-person care—not just exercises.

If you've been told your symptoms are “all in your head,” this episode is for you.

Guest: Dr. Julie Hershberg / Reactive PT
Instagram: @reactivept
Resources: FND resources hub, reactivept.com/FNDresources

Hosted by:

The Strengths Whisperer
Replay: Adaptability | How Different Coaches Stay Present in a World of Change

The Strengths Whisperer

Play Episode Listen Later Feb 4, 2026 41:52


In this replay episode, Brandon Miller discusses the strength of Adaptability with guests Kevin Lo and Jamal Cornelious. They share their personal journeys with strengths, how Adaptability plays a crucial role in their lives, and how it pairs with other strengths like Communication and Maximizer. The conversation explores the challenges and contrasts of Adaptability in professional environments, the expectations they set for themselves, and how they navigate change and uncertainty. Tune in now and discover how Adaptability can become your greatest advantage!

East Meets West Hunt
Ep. 476: No Time to Train? Building Consistency for Busy Hunters w/ Todd Bumgardner // Pack Mule Training Co.

East Meets West Hunt

Play Episode Listen Later Feb 3, 2026 106:04


Beau Martonik talks with Todd Bumgardner of Pack Mule Training Co. about staying consistent with training when time is limited. They discuss realistic fitness for busy hunters, how to adapt workouts to fit your life, and why consistency matters more than intensity. Todd shares practical advice for year-round training, redefining success, and building habits that support long-term health, performance, and enjoyment in the field. Topics: 00:00:00 — Intro 00:04:20 — Why "not enough time" is the biggest fitness lie 00:10:21 — Sleep, hydration, and the small habits that actually matter 00:17:38 — Todd's path to coaching and why hunters need a different approach 00:29:01 — Hunters vs. operators: why most programs don't fit real life 00:47:37 — How to stay consistent when life gets busy 01:03:50 — Setting yourself up for success instead of burnout 01:19:21 — Adapting training when plans fall apart 01:36:29 — Redefining success and enjoying the long game Resources: Follow Pack Mule Training Co on IG, Follow Todd on IG, Pack Mule Training Co website. Instagram: @eastmeetswesthunt, @beau.martonik. Facebook: East Meets West Outdoors. Shop Hunting Gear and Apparel: https://www.eastmeetswesthunt.com/ YouTube: Beau Martonik - https://www.youtube.com/channel/UCQJon93sYfu9HUMKpCMps3w Partner Discounts and Affiliate Links: https://www.eastmeetswesthunt.com/partners Amazon Influencer Page: https://www.amazon.com/shop/beau.martonik Learn more about your ad choices. Visit megaphone.fm/adchoices

Inspired Nonprofit Leadership
391: Are The Wrong Budget Priorities Holding Your Nonprofit Back? with Sarah Olivieri

Inspired Nonprofit Leadership

Play Episode Listen Later Feb 3, 2026 12:35


If your budget feels like a set of handcuffs instead of a helpful tool, this episode is for you. I break down why so many nonprofits get stuck prioritizing the bottom line instead of smart financial decisions—and how to reframe your budget as a living financial plan that helps you invest, adapt, and create more impact as new opportunities emerge. Episode Highlights 00:27 The Importance of Aligning Strategy and Operations 01:13 Common Budgeting Pitfalls 02:18 Reframing Your Budget as a Financial Plan 03:23 Prioritizing Spending for Maximum Impact 07:39 Adapting to New Opportunities Resource The Board Clarity Club A monthly membership for boards that provides training and live expert support to help your board have total clarity on how to be the best board possible. Learn More >> About Your Host Have you seen Casino Royale? That moment when Vesper slides in elegantly, opposite James, all charming smile, razor-sharp wit and mighty brainpower, and says, "I'm the money"? Well, your host, Sarah Olivieri has been likened to Vesper by one of her clients – not just because she's charming, beautiful and brainy – but because that bold statement "I'm the money" was, as it turned out, right ON the money. Sarah helps nonprofits transform their organizations from failing to thriving. And she's very, very good at it. She's brought nonprofits back from the brink of insolvency. She's averted major cash-flow crises, solved funding droughts, board conflicts and everything in between… and so she has literally become "the money" for many of the organizations she works with. As the former director of 3 nonprofits and founder of 5 for-profit businesses, she understands, deeply, the challenges and complexities facing organizations and she's created a framework, called The Impact Method®️, which can help you simplify operations, build aligned teams and make a bigger impact without getting overwhelmed or burning out – and Every. Single. One. Of her clients that have implemented her methodologies has achieved the most incredible results. Sarah is also a #1 international bestselling author, holds a BA from the University of Chicago with a focus on globalization and its effect on marginalized cultures, and a master's degree in Humanistic and Multicultural Education from SUNY New Paltz. Access additional training at www.pivotground.com/funding-secrets or apply for the THRiVE Program for personalized support at www.pivotground.com/application Be sure to subscribe to Inspired Nonprofit Leadership so that you don't miss a single episode, and while you're at it, won't you take a moment to write a short review and rate our show? It would be greatly appreciated! Let us know the topics or questions you would like to hear about in a future episode. You can do that and follow us on LinkedIn.

The Dairy Download
Ep. 108 - Adapting to the New Consumer

The Dairy Download

Play Episode Listen Later Feb 3, 2026 42:46


How is dairy positioned to adapt to the new consumer? This week on The Dairy Download, we have a special episode live from Dairy Forum 2026! We hear from two guests who are keyed in on evolving dairy consumer trends: Jennifer Galardi, Senior Policy Analyst for Restoring American Wellness, The Heritage Foundation DeVos Center; and Donna Berry, Food Scientist, Editor and Consultant, Daily Dose of Dairy. Tune in now to learn more! Thank you to Novonesis for sponsoring this episode! If your company is interested in sponsoring a block of episodes of The Dairy Download, contact IDFA's Lindsay Gold at lgold@idfa.org. Like the show? Rate The Dairy Download on Apple Podcasts!

Satansplain
Satansplain #109 - Adapting "The Satanic Rituals" for Solitary Use; Enochian Keys

Satansplain

Play Episode Listen Later Feb 3, 2026 58:41


"The Satanic Rituals" (Anton LaVey's 1972 companion book to "The Satanic Bible") presents many group rituals for Satanists. These rituals can, however, be adapted for solitary use as well. This episode also addresses some misconceptions regarding the role of ritual in Satanism and answers listener questions about the Enochian Keys. Support Satansplain: https://satansplain.locals.com/support  00:00 - Intro 01:17 - Reminder: Satanism is a religion 04:52 - Yes, we have ritual 08:44 - Reminder: ALL religion ritual is "LARPing" 09:45 - Group rituals 12:45 - Using TSR for solitary rituals 15:34 - Breaking down "Pilgrims of the Age of Fire - The Statement of Shaitan" 22:08 - Shaving the head with water from the Zamzam? 27:32 - The ritual, continued 32:28 - The importance of going through the motions / more details 39:54 - Enochian Keys 50:33 - Using other languages for the translation 51:33 - More on Enochian Keys

Kings and Generals: History for our Future
3.187 Fall and Rise of China: Battle of Suixian–Zaoyang-Swatow

Kings and Generals: History for our Future

Play Episode Listen Later Feb 2, 2026 35:03


Last time we spoke about the battle of Nanchang. After securing Hainan and targeting Zhejiang–Jiangxi Railway corridors, Japan's 11th Army, backed by armor, air power, and riverine operations, sought a rapid, surgical seizure of Nanchang to sever eastern Chinese logistics and coerce Chongqing. China, reorganizing under Chiang Kai-shek, concentrated over 200,000 troops across 52 divisions in the Ninth and Third War Zones, with Xue Yue commanding the 9th War Zone in defense of Wuhan-Nanchang corridors. The fighting began with German-style, combined-arms river operations along the Xiushui and Gan rivers, including feints, river crossings, and heavy artillery, sometimes using poison gas. From March 20–23, Japanese forces established a beachhead and advanced into Fengxin, Shengmi, and later Nanchang, despite stiff Chinese resistance and bridges being destroyed. Chiang's strategic shift toward attrition pushed for broader offensives to disrupt railways and rear areas, though Chinese plans for a counteroffensive repeatedly stalled due to logistics and coordination issues. By early May, Japanese forces encircled and captured Nanchang, albeit at heavy cost, with Chinese casualties surpassing 43,000 dead and Japanese losses over 2,200 dead.    #187 The Battle of Suixian–Zaoyang-Swatow Welcome to the Fall and Rise of China Podcast. I am your dutiful host Craig Watson. But before we start, I want to remind you that this podcast is only made possible through the efforts of Kings and Generals over at YouTube. Perhaps you want to learn more about the history of Asia? Kings and Generals have an assortment of episodes on the history of Asia and much more, so go give them a look over on YouTube. So please subscribe to Kings and Generals over at YouTube and, to continue helping us produce this content, please check out www.patreon.com/kingsandgenerals. 
If you are still hungry for more history-related content, head over to my channel, the Pacific War Channel, where I cover the history of China and Japan from the 19th century until the end of the Pacific War. Having seized Wuhan in a brutal offensive the previous year, the Japanese sought not just to hold their ground but to solidify their grip on this vital hub. Wuhan, a bustling metropolis at the confluence of the Yangtze and Han Rivers, had become a linchpin in their strategy, a base from which they could project power across central China. Yet the city was far from secure: Chinese troops in northern Hubei and southern Henan, perched above the mighty Yangtze, posed an unrelenting threat. To relieve the mounting pressure on their newfound stronghold, the Japanese high command orchestrated a bold offensive against the towns of Suixian and Zaoyang. They aimed to annihilate the main force of the Chinese 5th War Zone, a move that would crush the Nationalist resistance in the region and secure their flanks. This theater of war, freshly designated as the 5th War Zone after the grueling Battle of Wuhan, encompassed a vast expanse west of Shashi in the upper Yangtze basin. It stretched across northern Hubei, southern Henan, and the rugged Dabie Mountains in eastern Anhui, forming a strategic bulwark that guarded the eastern approaches to Sichuan, the very heartland of the Nationalist government's central institutions. Historian Rana Mitter in Forgotten Ally described this zone as "a gateway of immense importance, a natural fortress that could either serve as a launchpad for offensives against Japanese-held territories or a defensive redoubt protecting the rear areas of Sichuan and Shaanxi". 
The terrain itself was a defender's dream and an attacker's nightmare: to the east rose the imposing Dabie Mountains, their peaks cloaked in mist and folklore; the Tongbai Mountains sliced across the north like a jagged spine; the Jing Mountains guarded the west; the Yangtze River snaked southward, its waters a formidable barrier; the Dahong Mountains dominated the center, offering hidden valleys for ambushes; and the Han River (also known as the Xiang River) carved a north-south path through it all. Two critical transport arteries—the Hanyi Road linking Hankou to Yichang in Hubei, and the Xianghua Road connecting Xiangyang to Huayuan near Hankou—crisscrossed this landscape, integrating the war zone into a web of mobility. From here, Chinese forces could menace the vital Pinghan Railway, that iron lifeline running from Beiping (modern Beijing) to Hankou, while also threatening the Wuhan region itself. In retreat, it provided a sanctuary to shield the Nationalist heartlands. As military strategist Sun Tzu might have appreciated, this area had long been a magnet for generals, its contours shaping the fates of empires since ancient times. Despite the 5th War Zone's intricate troop deployments, marked by units of varying combat prowess and a glaring shortage of heavy weapons, the Chinese forces made masterful use of the terrain to harass their invaders. Drawing from accounts in Li Zongren's memoirs, he noted how these defenders, often outgunned but never outmaneuvered, turned hills into fortresses and rivers into moats. In early April 1939, as spring rains turned paths to mud, Chinese troops ramped up their disruptions along the southern stretches of the Pinghan Railway, striking from both eastern and western flanks with guerrilla precision. What truly rattled the Japanese garrison in Wuhan was the arrival of reinforcements: six full divisions redeployed to Zaoyang, bolstering the Chinese capacity to launch flanking assaults that could unravel Japanese supply lines. 
Alarmed by this buildup, the Japanese 11th Army, ensconced in the Wuhan area under the command of General Yasuji Okamura, a figure whose tactical acumen would later earn him notoriety in the Pacific War, devised a daring plan. They intended to plunge deep into the 5th War Zone, smashing the core of the Chinese forces and rendering them impotent, thereby neutralizing the northwestern threat to Wuhan once and for all. From April onward, the Japanese mobilized with meticulous preparation, amassing troops equipped with formidable artillery, rumbling tanks, and squadrons of aircraft that darkened the skies. Historians estimate they committed roughly three and a half divisions to this endeavor, as detailed in Edward J. Drea's In the Service of the Emperor: Essays on the Imperial Japanese Army. Employing a classic pincer movement, a two-flank encirclement coupled with a central breakthrough, they aimed for a swift, decisive strike to obliterate the main Chinese force in the narrow Suixian-Zaoyang corridor, squeezed between the Tongbai and Dahong Mountains. The offensive erupted in full fury on May 1, 1939, as Japanese columns surged forward like a tidal wave, their engines roaring and banners fluttering in the dust-choked air. General Li Zongren, the commander of the 5th War Zone, a man whose leadership had already shone in earlier campaigns like the defense of Tai'erzhuang in 1938, issued urgent orders to cease offensive actions against the Japanese and pivot to a defensive stance. Based on intelligence about the enemy's dispositions, Li orchestrated a comprehensive campaign structure, assigning precise defensive roles and battle plans to each unit. This was no haphazard scramble; it was a symphony of strategy, as Li himself recounted in his memoirs, emphasizing the need to exploit the terrain's natural advantages. 
While various Chinese war zones executed the "April Offensive" from late April to mid-May, actively harrying and containing Japanese forces, the 5th War Zone focused its energies on the southern segment of the Pinghan Railway, assaulting it from both sides in a bid to disrupt logistics. The main force of the 31st Army Group, under the command of Tang Enbo, a general known for his aggressive tactics and later criticized for corruption, shifted from elsewhere in Hubei to Zaoyang, fortifying the zone and posing a dire threat to the Japanese flanks and rear areas. To counter this peril and safeguard transportation along the Wuhan-Pinghan Railway, the Japanese, led by the formidable Okamura, unleashed their assault from the line stretching through Xinyang, Yingshan, and Zhongxiang. Mobilizing the 3rd, 13th, and 16th Divisions alongside the 2nd and 4th Cavalry Brigades, they charged toward the Suixian-Zaoyang region in western Hubei, intent on eradicating the Chinese main force and alleviating the siege-like pressure on Wuhan. In a masterful reorganization, Li Zongren divided his forces into two army groups, the left and right, plus a dedicated river defense army. His strategy was a blend of attrition and opportunism: harnessing the Tongbai and Dahong Mountains, clinging to key towns like lifelines, and grinding down the Japanese through prolonged warfare while biding time for a counterstroke. This approach echoed the Fabian tactics of ancient Rome, wearing the enemy thin before delivering the coup de grâce. The storm broke at dawn on May 1, when the main contingents of the Japanese 16th and 13th Divisions, bolstered by the 4th Cavalry Brigade from their bases in Zhongxiang and Jingshan, hurled themselves against the Chinese 37th and 180th Divisions of the Right Army Group. Supported by droning aircraft that strafed from above and tanks that churned the earth below, the Japanese advanced with mechanical precision. 
By May 4, they had shattered the defensive lines flanking Changshoudian, then surged along the east bank of the Xiang River toward Zaoyang in a massive offensive. Fierce combat raged through May 5, as described in Japanese war diaries compiled in Senshi Sōsho (the official Japanese war history series), where soldiers recounted the relentless Chinese resistance amid the smoke and clamor. The Japanese finally breached the defenses, turning their fury on the 122nd Division of the 41st Army. In a heroic stand, the 180th Division clung to Changshoudian, providing cover for the main force's retreat along the east-west Huangqi'an line. The 37th Division fell back to the Yaojiahe line, while elements of the 38th Division repositioned into Liushuigou. On May 6, the Japanese seized Changshoudian, punched through Huangqi'an, and drove northward, unleashing a devastating assault on the 122nd Division's positions near Wenjiamiao. Undeterred, Chinese defenders executed daring flanking maneuvers in the Fenglehe, Yaojiahe, Liushuihe, Shuanghe, and Zhangjiaji areas, turning the landscape into a labyrinth of ambushes. May 7 saw the Japanese pressing on, capturing Zhangjiaji and Shuanghe. By May 8, they assaulted Maozifan and Xinji, where ferocious battles erupted, soldiers clashing in hand-to-hand combat amid the ruins. By May 10, the Japanese had overrun Huyang Town and Xinye, advancing toward Tanghe and the northeastern fringes of Zaoyang. Yet, the Tanghe River front witnessed partial Chinese recoveries: remnants of the Right Army Group, alongside troops from east of the Xianghe, reclaimed Xinye. The 122nd and 180th Divisions withdrew north of Tanghe and Fancheng, while the 37th, 38th, and 132nd Divisions steadfastly held the east bank of the Xianghe River. Concurrently, the main force of the Japanese 3rd Division launched from Yingshan against the 84th and 13th Armies of the 11th Group Army in the Suixian sector. 
After a whirlwind of combat, the Chinese 84th Army retreated to the Taerwan position. On May 2, the 3rd Division targeted the Gaocheng position of the 13th Army within the 31st Group Army; the ensuing clashes in Taerwan and Gaocheng were a maelstrom of fire, with the Taerwan position exchanging hands multiple times like a deadly game of tug-of-war. By May 4, in a grim escalation, Japanese forces deployed poison gas, a violation of international norms that drew condemnation and is documented in Allied reports from the era, inflicting horrific casualties and compelling the Chinese to relinquish Gaocheng, which fell into enemy hands. On May 5, backed by aerial bombardments, tank charges, and artillery barrages, the Japanese renewed their onslaught along the Gaocheng River and the Lishan-Jiangjiahe line. By May 6, the beleaguered Chinese were forced back to the Tianhekou and Gaocheng line. Suixian succumbed on May 7. On May 8, the Japanese shattered the second line of the 84th Army, capturing Zaoyang and advancing on the Jiangtoudian position of the 85th Army. To evade encirclement, the defenders mounted a valiant resistance before withdrawing from Jiangtoudian; the 84th Army relocated to the Tanghe and Baihe areas, while the 39th Army embedded itself in the Dahongshan for guerrilla operations—a tactic that would bleed the Japanese through hit-and-run warfare, as noted in guerrilla warfare studies by Mao Zedong himself. By May 10, the bulk of the 31st Army Group maneuvered toward Tanghe, reaching north of Biyang by May 15. From Xinyang, Japanese forces struck at Tongbai on May 8; by May 10, elements from Zaoyang advanced to Zhangdian Town and Shangtun Town. In response, the 68th Army of the 1st War Zone dispatched the 143rd Division to defend Queshan and Minggang, and the 119th Division to hold Tongbai. After staunchly blocking the Japanese, they withdrew on May 11 to positions northwest and southwest of Tongbai, shielding the retreat of 5th War Zone units. 
The Japanese 4th Cavalry Brigade drove toward Tanghe, seizing Tanghe County on May 12. But the tide was turning. In a brilliant reversal, the Fifth War Zone commanded the 31st Army Group, in concert with the 2nd Army Group from the 1st War Zone, to advance from southwestern Henan. Their mission: encircle the bulk of Japanese forces on the Xiangdong Plain and deliver a crushing blow. The main force of the 33rd Army Group targeted Zaoyang, while other units pinned down Japanese rear guards in Zhongxiang. The Chinese counteroffensive erupted with swift successes, Tanghe County was recaptured on May 14, and Tongbai liberated on May 16, shattering the Japanese encirclement scheme. On May 19, after four grueling days of combat, Chinese forces mauled the retreating Japanese, reclaiming Zaoyang and leaving the fields strewn with enemy dead. The 39th Army of the Left Army Group dispersed into the mountains for guerrilla warfare, a shadowy campaign of sabotage and surprise. Forces of the Right Army Group east of the river, along with river defense units, conducted relentless raids on Japanese rears and supply lines over multiple days, sowing chaos before withdrawing to the west bank of the Xiang River on May 21. On May 22, they pressed toward Suixian, recapturing it on May 23. The Japanese, battered and depleted, retreated to their original garrisons in Zhongxiang and Yingshan, restoring the pre-war lines as the battle drew to a close. Throughout this clash, the Chinese held a marked superiority in manpower and coordination, though their deployments lacked full flexibility, briefly placing them on the defensive. After protracted, blood-soaked fighting, they restored the original equilibrium. Despite grievous losses, the Chinese thwarted the Japanese encirclement and exacted a heavy toll, reports from the time, corroborated by Japanese records in Senshi Sōsho, indicate over 13,000 Japanese killed or wounded, with more than 5,000 corpses abandoned on the battlefield. 
This fulfilled the strategic goal of containing and eroding Japanese strength. Chinese casualties surpassed 25,000, a testament to the ferocity of the struggle. The 5th War Zone seized the initiative in advances and retreats, deftly shifting to outer lines and maintaining positional advantages. As Japanese forces withdrew, Chinese pursuers harried and obstructed them, yielding substantial victories. The Battle of Suizao spanned less than three weeks. The Japanese main force pierced defenses on the east bank of the Han River, advancing to encircle one flank as planned. However, the other two formations met fierce opposition near Suixian and northward, stalling their progress. Adapting to the battlefield's ebb and flow, the Fifth War Zone transformed its tactics: the main force escaped encirclement, maneuvered to outer lines for offensives, and exploited terrain to hammer the Japanese. The pivotal order to flip from defense to offense doomed the encirclement; with the counterattack triumphant, the Japanese declined to hold and retreated. The Chinese pursued with unyielding vigor. By May 24, they had reclaimed Zaoyang, Tongbai, and other locales. Save for Suixian County, the Japanese had fallen back to pre-war positions, reinstating the regional status quo. Thus, the battle concluded, a chapter of resilience etched into the chronicles of China's defiance. In the sweltering heat of southern China, where the humid air clung to every breath like a persistent fog, the Japanese General Staff basked in what they called a triumphant offensive and defensive campaign in Guangdong. But victory, as history so often teaches, is a double-edged sword. By early 1939, the strain was palpable. Their secret supply line snaking from the British colony of Hong Kong to the Chinese mainland was under constant disruption, raids by shadowy guerrilla bands, opportunistic smugglers, and the sheer unpredictability of wartime logistics turning what should have been a lifeline into a leaky sieve. 
Blockading the entire coastline? A pipe dream, given the vast, jagged shores of Guangdong, dotted with hidden coves and fishing villages that had evaded imperial edicts for centuries. Yet, the General Staff's priorities were unyielding, laser-focused on strangling the Nationalist capital of Chongqing through a relentless blockade. This meant the 21st Army, that workhorse of the Japanese invasion force, had to stay in the fight—no rest for the weary. Drawing from historical records like the Senshi Sōsho (War History Series) compiled by Japan's National Institute for Defense Studies, we know that after the 21st Army reported severing what they dubbed the "secret transport line" at Xinhui, a gritty, hard-fought skirmish that left the local landscape scarred with craters and abandoned supply crates, the General Staff circled back to the idea of a full coastal blockade. It was a classic case of military opportunism: staff officers, poring over maps in dimly lit war rooms in Tokyo, suddenly "discovered" Shantou as a major port. Not just any port, mind you, but a bustling hub tied to the heartstrings of Guangdong's overseas Chinese communities. Shantou and nearby Chao'an weren't mere dots on a map; they were the ancestral hometowns of countless Chaoshan people who had ventured abroad to Southeast Asia, sending back remittances that flowed like lifeblood into the region. Historical economic studies, such as those in The Overseas Chinese in the People's Republic of China by Stephen Fitzgerald, highlight how these funds from the Chaoshan diaspora, often funneled through family networks in places like Singapore and Thailand, were substantial, indirectly fueling China's war effort by sustaining local economies and even purchasing arms on the black market. The Chao-Shao Highway, that dusty artery running near Shantou, was pinpointed as a critical vein connecting Hong Kong's ports to the mainland's interior. So, in early June 1939, the die was cast: Army Order No. 
310 thundered from headquarters, commanding the 21st Army to seize Shantou. The Chief of the General Staff himself provided the strategic blueprint, a personal touch that underscored the operation's gravity. The Army Department christened the Chaoshan push "Operation Hua," a nod perhaps to the flowery illusions of easy conquest, while instructing the Navy Department to tag along for the ride. In naval parlance, it became "Operation J," a cryptic label that masked the sheer scale unfolding. Under the Headquarters' watchful eye, what started as a modest blockade morphed into a massive amphibious assault, conjured seemingly out of thin air like a magician's trick, but one with deadly props. The 5th Fleet's orders mobilized an impressive lineup: the 9th Squadron for heavy hitting, the 5th Mine Boat Squadron to clear watery hazards, the 12th and 21st Sweeper Squadrons sweeping for mines like diligent janitors of the sea, the 45th Destroyer Squadron adding destroyer muscle, and air power from the 3rd Combined Air Group (boasting 24 land-based attack aircraft and 9 reconnaissance planes that could spot a fishing boat from miles away). Then there was the Chiyoda Air Group with its 9 reconnaissance aircraft, the Guangdong Air Group contributing a quirky airship and one more recon plane, the 9th Special Landing Squadron from Sasebo trained for beach assaults, and a flotilla of special ships for logistics. On the ground, the 21st Army threw in the 132nd Brigade from the 104th Division, beefed up with the 76th Infantry Battalion, two mountain artillery battalions for lobbing shells over rugged terrain, two engineer battalions to bridge rivers and clear paths, a light armored vehicle platoon rumbling with mechanized menace, and a river-crossing supplies company to keep the troops fed and armed. All under the command of Brigade Commander Juro Goto, a stern officer whose tactical acumen was forged in earlier Manchurian campaigns. 
The convoy's size demanded rehearsals; the 132nd Brigade trained for boat transfers at Magong in the Penghu Islands, practicing the precarious dance of loading men and gear onto rocking vessels under simulated fire. Secrecy shrouded the whole affair, many officers and soldiers, boarding ships in the dead of night, whispered among themselves that they were finally heading home to Japan, a cruel ruse to maintain operational security. For extra punch, the 21st Army tacked on the 31st Air Squadron for air support, their planes droning like angry hornets ready to sting. This overkill didn't sit well with everyone. Lieutenant General Ando Rikichi, the pragmatic commander overseeing Japanese forces in the region, must have fumed in his Guangzhou headquarters. His intelligence staff, drawing from intercepted radio chatter and local spies as noted in postwar analyses like The Japanese Army in World War II by Gordon L. Rottman, reported that the Chongqing forces in Chaozhou were laughably thin: just the 9th Independent Brigade, a couple of security regiments, and ragtag "self-defense groups" of armed civilians. Why unleash such a sledgehammer on a fly? The mobilization's magnitude even forced a reshuffling of defenses around Guangzhou, pulling resources from the 12th Army's front lines and overburdening the already stretched 18th Division. It was bureaucratic overreach at its finest, a testament to the Imperial Staff's penchant for grand gestures over tactical efficiency. Meanwhile, on the Nationalist side, the winds of war carried whispers of impending doom. The National Revolutionary Army's war histories, such as those compiled in the Zhongguo Kangri Zhanzheng Shi (History of China's War of Resistance Against Japan), note that Chiang Kai-shek's Military Commission had snagged intelligence as early as February 1939 about Japan's plans for a large-scale invasion of Shantou. 
The efficiency of the Military Command's Second Bureau and the Military Intelligence Bureau was nothing short of astonishing: networks of agents, double agents, and radio intercepts piercing the veil of Japanese secrecy. Even as the convoy slipped out of Penghu, a detailed report outlining operational orders landed on Commander Zhang Fakui's desk, the ink still fresh. Zhang, a battle-hardened strategist whose career spanned the Northern Expedition and beyond, had four months to prepare for what would be dubbed the decisive battle of Chaoshan. Yet, in a move that baffled some contemporaries, he chose not to fortify and defend it tooth and nail. After the Fourth War Zone submitted its opinions, likely heated debates in smoke-filled command posts, Chiang Kai-shek greenlit the plan. By March, the Military Commission issued its strategic policy: when the enemy hit Chaoshan, a sliver of regular troops would team up with civilian armed forces for mobile and guerrilla warfare, grinding down the invaders like sandpaper on steel. The orders specified guerrilla zones in Chaozhou, Jiaxing, and Huizhou, unifying local militias under a banner of "extensive guerrilla warfare" to coordinate with regular army maneuvers, gradually eroding the Japanese thrust. In essence, the 4th War Zone wasn't tasked with holding Chao'an and Shantou at all costs; instead, they'd strike hard during the landing, then let guerrillas harry the occupiers post-capture. It was a doctrine of attrition in a "confined battlefield," honing skills through maneuver and ambush. Remarkably, the fall of these cities was preordained by the Military Commission three months before the Japanese even issued their orders, a strategic feint that echoed ancient Sun Tzu tactics of yielding ground to preserve strength. 
To execute this, the 4th War Zone birthed the Chao-Jia-Hui Guerrilla Command after meticulous preparation, with General Zou Hong, head of Guangdong's Security Bureau and a no-nonsense administrator known for his anti-smuggling campaigns, taking the helm. In just three months, Zhang Fakui scraped together the Independent 9th Brigade, the 2nd, 4th, and 5th Guangdong Provincial Security Regiments, and the Security Training Regiment. Even with the 9th Army Group lurking nearby, he handed the reins of the Chao-Shan operation to the 12th Army Group's planners. Their March guidelines sketched three lines of resistance from the coast to the mountains, a staged withdrawal that allowed frontline defenders to melt away like ghosts. This blueprint mirrored Chiang Kai-shek's post-Wuhan reassessment, where the loss of that key city in 1938 prompted a shift to protracted warfare. A Xinhua News Agency columnist later summed it up scathingly: "The Chongqing government, having lost its will to resist, colludes with the Japanese and seeks to eliminate the Communists, adopting a policy of passive resistance." This narrative, propagated by Communist sources, dogged Chiang and the National Revolutionary Army for decades, painting them as defeatists even as they bled the Japanese dry through attrition. February 1939 saw Commander Zhang kicking off a reorganization of the 12th Army Group, transforming it from a patchwork force into something resembling a modern army. He could have hunkered down, assigning troops to a desperate defense of Chaoshan, but that would have handed the initiative to the Japanese General Staff, whose caution often bordered on paranoia. Zhang, with the wisdom of a seasoned general who had navigated the treacherous politics of pre-war China, weighed the scales carefully. His vision? Forge the 12th Army Group into a nimble field army, not squander tens of thousands on a secondary port.
Japan's naval and air dominance, evident in the devastation of Shanghai in 1937, meant Guangdong's forces could be pulverized in Shantou just as easily. Losing Chaozhou and Shantou? Acceptable, if it preserved core strength for the long haul. Post-Xinhui, Zhang doubled down on resistance, channeling efforts into live-fire exercises for the 12th Army, turning green recruits into battle-ready soldiers amid the Guangdong hills. The war's trajectory after 1939 would vindicate him: his forces became pivotal in later counteroffensives, proving that a living army trumped dead cities. Opting out of a static defense, Zhang pivoted to guerrilla warfare to bleed the Japanese while clutching strategic initiative. He ordered local governments to whip up coastal guerrilla forces from Chao'an to Huizhou, melding militias, national guards, police, and private armed groups into official folds. These weren't elite shock troops, but in wartime's chaos, they controlled locales effectively, disrupting supply lines and gathering intel. For surprises, he unleashed two mobile units: the 9th Independent Brigade and the 20th Independent Brigade. Formed fresh after the War of Resistance erupted, these brigades shone for their efficiency within the cumbersome Guangdong Army structure. Division-level units were too bulky for spotty communications, so Yu Hanmou's command birthed these independent outfits, staffed with crack officers. The 9th, packing direct-fire artillery for punch, and the 20th, dubbed semi-mechanized for its truck-borne speed, prowled the Chaoshan–Huizhou coast from 1939. Zhang retained their three-regiment setup, naming Hua Zhenzhong and Zhang Shou as commanders, granting them autonomy to command in the field like roving wolves. As the 9th Independent Brigade shifted to Shantou, its 627th Regiment was still reorganizing in Heyuan, a logistical hiccup amid the scramble.
Hua Zhenzhong, a commander noted for his tactical flexibility in regional annals, deployed the 625th Regiment and 5th Security Regiment along the coast, with the 626th as reserve in Chao'an. Though the Fourth War Zone had written off Chaoshan, Zhang yearned to showcase Guangdong grit before the pullback. Dawn broke on June 21, 1939, at 4:30 a.m., with Japanese reconnaissance planes slicing through the fog over Shantou, Anbu, and Nanbeigang, ghostly silhouettes against the gray sky. By 5:30, the mist lifted, revealing a nightmare armada: over 40 destroyers and 70–80 landing craft churning toward the coast on multiple vectors, their hulls cutting the waves like knives. The 626th Regiment's 3rd Battalion at Donghushan met the first wave with a hail of fire from six light machine guns, repelling the initial boats in a frenzy of splashes and shouts. But the brigade's long-range guns couldn't stem the tide; Hua focused on key chokepoints, aiming to bloody the invaders rather than obliterate them. By morning, the 3rd Battalion of the 625th Regiment charged into Shantou City, joined by the local police corps digging in amid urban sprawl. Combat raged at Xinjin Port and the airport's fringes, where Nationalist troops traded shots with advancing Japanese under the absent shadow of a Chinese navy. Japanese naval guns, massed offshore, pounded the outskirts like thunder gods in fury. By 2:00 a.m. on the 22nd, Shantou crumpled as defenders' ammo ran dry, the city falling in a haze of smoke and echoes. Before the loss, Hua had positioned the 1st Battalion of the 5th Security Regiment at Anbu, guarding the road to Chao'an. Local lore, preserved in oral histories collected by the Chaozhou Historical Society, recalls Battalion Commander Du Ruo leading from the front, rifle in hand, but Japanese barrages, bolstered by superior firepower, forced a retreat. Post-capture, Tokyo's forces paused to consolidate, unleashing massacres on fleeing civilians in the outskirts.
A flotilla of civilian boats, intercepted at sea, became a grim training ground for bayonet drills, a barbarity echoed in survivor testimonies compiled in works such as The Rape of Nanking and later studies extending its scope to atrocities in Guangdong. With Shantou gone, Hua pivoted to flank defense, orchestrating night raids on Japanese positions around Anbu and Meixi. On June 24th, Major Du Ruo spearheaded an assault into Anbu but fell gravely wounded amid the chaos. Later, the 2nd Battalion of the 626th overran spots near Meixi. A Japanese sea-flanking maneuver targeted Anbu, but Nationalists held at Liulong, sparking nocturnal clashes, grenade volleys, bayonet charges, and hand-to-hand brawls that drained both sides like a slow bleed. June 26th saw the 132nd Brigade lumber toward Chao'an. Hua weighed options: all-out assault or guerrilla fade? He chose to dig in on the outskirts, reserving two companies of the 625th and a special ops battalion in the city. The 27th brought a day-long Japanese onslaught, culminating in Chao'an's fall after fierce rear-guard actions by the 9th Independent Brigade. Evacuations preceded the collapse, with Japanese propaganda banners fluttering falsely, claiming Nationalists had abandoned defense. Yet Hua's call preserved his brigade for future fights; the Japanese claimed an empty prize. I would like to take this time to remind you all that this podcast is only made possible through the efforts of Kings and Generals over at YouTube. Please go subscribe to Kings and Generals over at YouTube and, to continue helping us produce this content, please check out www.patreon.com/kingsandgenerals. If you are still hungry after that, give my personal channel a look over at The Pacific War Channel on YouTube; it would mean a lot to me. The Japanese operations had yet again plugged up supply leaks into Nationalist China. The fall of Suixian, Zaoyang, and Shantou was a heavy loss for the Chinese war effort.
However, the Chinese were also able to exact heavy casualties on the invaders and thwart their encirclement attempts. China was still in the fight for her life.

Ending Human Trafficking Podcast
364: Are Our Systems Adapting as Fast as Traffickers Are?

Ending Human Trafficking Podcast

Play Episode Listen Later Feb 2, 2026 31:46


Dr. Kari Johnstone joins Dr. Sandie Morgan as they discuss how traffickers adapt fast, moving money, victims, and exploitation through digital systems most of us interact with every day, examining whether our institutions are adapting fast enough to protect victims without them risking everything to testify.Dr. Kari JohnstoneDr. Kari Johnstone is the OSCE Special Representative and Co-ordinator for Combating Trafficking in Human Beings, representing the Organization for Security and Co-operation in Europe at the political level on human trafficking issues and coordinating anti-trafficking efforts across the OSCE region. Before joining the OSCE, Dr. Johnstone spent nearly a decade (2014-2023) as Senior Official, Acting Director, and Principal Deputy Director of the U.S. Department of State's Office to Monitor and Combat Trafficking in Persons (J/TIP), where she advised senior leadership on global trafficking policy and programming and oversaw the annual Trafficking in Persons Report. Her extensive U.S. government service also includes senior roles in the Bureau of Democracy, Human Rights, and Labor. Dr. Johnstone holds a B.A. from the University of Michigan and a Ph.D. 
in Political Science from the University of California, Berkeley.

Key Points
• The OSCE survey revealed a 17-fold increase in forced criminality cases over five years across the 57 member states, making it the fastest growing form of human trafficking globally.
• Forced scamming, which originated in Southeast Asia, is now being exported to other regions as criminals adopt this lucrative business model that exploits victims with brutal tactics to defraud others.
• Technology and artificial intelligence present both challenges and opportunities in combating trafficking, allowing law enforcement to process data more quickly to find victims and perpetrators while also being misused by traffickers for recruitment and exploitation.
• Financial intelligence and following the money can supplement or even replace victim testimony in prosecutions, reducing the burden on survivors and providing effective pathways to convict traffickers.
• The non-punishment principle remains woefully inadequate in practice worldwide, with victims often arrested, prosecuted, and convicted for crimes directly related to their trafficking experience, creating lifelong consequences that prevent access to housing, employment, and stability.
• The United States leads globally on criminal record relief for trafficking survivors, with 48-49 states having vacatur or expungement laws and new federal legislation (Trafficking Survivors Relief Act) awaiting presidential signature, though much work remains worldwide.
• Victim assistance must be unlinked from the criminal justice process, allowing survivors to receive care and services first before deciding whether to cooperate with law enforcement, which actually increases the likelihood they will come forward and participate.
• The demographics of trafficking victims are shifting beyond stereotypes, with forced scamming targeting educated individuals with IT and language skills, while forced criminality increasingly exploits younger children, including those under age 10, for drug-related crimes and violence.

Resources
• Organization for Security and Co-operation in Europe (OSCE)
• OSCE Office of the Special Representative and Co-ordinator for Combating Trafficking in Human Beings
• Protocol to Prevent, Suppress and Punish Trafficking in Persons (UN Palermo Protocol)
• UN Global Plan of Action to Combat Trafficking in Persons
• U.S. State Department Office to Monitor and Combat Trafficking in Persons
• Trafficking in Persons Report
• Trafficking Survivors Relief Act
• Ending Human Trafficking Podcast

Transcript
Transcript will be here when available.

Work @ Home RockStar Podcast
WHR 3.261 : David Feinman - The Journey from Humble Beginnings to Business Success

Work @ Home RockStar Podcast

Play Episode Listen Later Feb 2, 2026 39:36


Episode Summary In this episode of the Work at Home Rockstar Podcast, Tim Melanson chats with David Feinman, Co-Founder and CEO of Viral Ideas, about building a business from humble beginnings and scaling it through persistence, leadership, and smart hiring. David shares how he started with just $200 and a single client, the hard lessons learned through near-collapse moments, and what it really takes to grow a team-driven company without becoming the bottleneck. This conversation digs deep into entrepreneurship realities, from finding your first customer to developing strong leadership skills, empowering employees, and using mentorship to unlock the next stage of growth. Who is David Feinman? David Feinman is the Co-Founder and CEO of Viral Ideas, a video editing company that helps brands and agencies scale their video content across social platforms. Over the past decade, David has grown Viral Ideas from a scrappy startup into a company with 45 employees and hundreds of clients, delivering tens of thousands of videos each year. 
Connect with David Feinman Website: https://www.viralideamarketing.com Instagram: https://www.instagram.com/davidfeinman LinkedIn: https://www.linkedin.com/in/david-feinman-7a069255/ Host Contact Details Website: https://workathomerockstar.com Facebook: https://www.facebook.com/workathomerockstar Instagram: https://www.instagram.com/workathomerockstar LinkedIn: https://www.linkedin.com/in/timmelanson YouTube: https://www.youtube.com/@WorkAtHomeRockStarPodcast X / Twitter: https://twitter.com/workathomestar Timestamps 00:00 — Introduction to the Work at Home Rockstar Podcast 00:27 — David Feinman's Entrepreneurial Journey 01:12 — The Importance of Starting and Adapting 02:44 — Overcoming Business Bottlenecks 08:56 — The Power of Perseverance 14:42 — Hiring and Building a Team 19:29 — The Role of a CEO 20:37 — Empowering Employees and Leadership Growth 20:55 — The CEO's Role and Responsibilities 21:18 — Overcoming Leadership Challenges 26:37 — The Importance of Mentorship and Coaching 32:54 — Business Growth and Hiring Practices 37:59 — Conclusion and Final Thoughts

Stop Scrolling, Start Scaling Podcast
249. Adapting to the 2026 Social Media Consumer (Social Bite)

Stop Scrolling, Start Scaling Podcast

Play Episode Listen Later Feb 2, 2026 12:47


Social media didn't suddenly stop working; your audience just evolved faster than your strategy did. Consumers are watching more, engaging less, and deciding faster than ever, and most brands are completely missing it. In this episode, Emma breaks down the major shifts happening in consumer behavior on social media and what brands must do to adapt. Today's buyer is more informed, more skeptical, and far quieter than ever before – consuming more content, engaging less publicly, and making decisions long before reaching out. This episode will explain why likes and followers are no longer the metrics that matter, how "creeper behavior" is actually a positive signal, and why emotionally driven, story-based content converts faster in 2026. You'll also learn the biggest mistakes brands are making (such as treating social media as a launch-only channel) and how to fix them. This episode is your reminder that social media is a long-term trust engine, and brands willing to adapt now will win later.

Listen in as Emma explains:
• Why the 2026 buyer consumes more content yet disengages faster
• The two metrics that matter far more than virality or volume
• How to shift your social media approach from "quick ROI channel" to "long-term trust engine"
• And so much more!

Connect with Ninety Five Media:
Check out our website: ninetyfivemedia.co
Follow us on Instagram: instagram.com/ninety.five.media
Grow your brand's social media presence with us: Tell us about your business goals and explore how our social media management services can help you reach them! ninetyfivemedia.co/stop-scrolling-start-scaling-inquiry

Contractor Cuts
The Contractor Operating System Step 3 (Part 1): Every Core Process Needed to Grow Your Company

Contractor Cuts

Play Episode Listen Later Feb 2, 2026 41:49 Transcription Available


We outline level one and level two of core process documentation, focusing on a 10-step project flow, weekly management, financial management, and product quality. The aim is to scale beyond owner-only decisions, improve cash flow, and deliver a better client experience.
• Seven-step operating system context and focus on step three
• Ten-step project flow from intake to final invoice
• Desk estimate to on-site estimate handoff and role clarity
• Adapting processes by niche while pressure-testing for scale
• Weekly management rhythm and calendar ownership
• Financial management processes versus financial metrics
• Projections, AR, and cash flow planning to avoid debt shuffling
• Product quality: crew onboarding, benchmarks, and client experience
• Benchmark walks to prevent rework and drive clear choices
• How processes interlock to create consistency and profit

If you want to have one of those intro calls, go to contractorcuts.com or ProStruct360.com and go to contact us.

Have a question or an idea to improve the podcast? Email us at team@prostruct360.com

Want to learn more about our software or coaching? Visit our website at ProStruct360.com

Ask Drone U
EDL 019: Turning Passion to Profession: Running successful drone business in Hawaii, with Gabo Hanohano

Ask Drone U

Play Episode Listen Later Feb 1, 2026


In this episode, Gabe Hanohano takes us on his inspiring journey of building a successful drone business in Hawaii. Starting with a deep-rooted passion for photography and technology, Gabe navigates the intricate world of drones, sharing the highs and lows of his entrepreneurial path. He underscores the critical role of networking in Hawaii's relationship-driven market and the importance of adapting business strategies, including rebranding for better market positioning. Gabe also delves into the power of leveraging technology, such as AI, to enhance business operations and the necessity of a strong online presence for client attraction. His story is a testament to the value of continuous learning, resilience, and maintaining relationships in a rapidly evolving industry. Aspiring drone entrepreneurs will find Gabe's insights on exploring new opportunities, the potential of NSF grants for research and development, and the importance of staying grounded in reality both enlightening and motivating. Join us as Gabe shares his wisdom on thriving in the drone industry amidst challenges and uncertainties.

Want to Make Money Flying Drones? DroneU gives you the blueprint to start and grow a real drone business:
• FAA Part 107 prep
• 40+ courses on flight skills, real estate, mapping, and business
• Pricing guides, client acquisition, and weekly coaching
• Supportive community of top-tier drone pros
Start here: https://www.thedroneu.com
Know someone ready to take the leap? Share this episode with them!

Stuck between a safe job and chasing your drone dream? Download our FREE Drone Pilot Starter Kit
Includes: FAA checklist, pricing template, and plug-and-play proposal to help you land your first client with confidence.
https://learn.thedroneu.com/bundles/drone-pilot-starter-kit  Timestamps [02:49] - Gabe's Journey into Drones [05:59] - First Paid Jobs and Learning Experiences [09:06] - Building a Drone Business in Hawaii [12:04] - The Importance of Networking and Relationships [15:04] - Adapting Business Strategies and Name Changes [18:04] - Navigating the First Year of Business [20:46] - Acquiring Contracts and Client Relationships [23:54] - Leveraging Technology for Business Growth [26:58] - SEO and Online Presence [30:06] - The Role of AI in Business Development [33:01] - Long-Term Business Strategies and Mindset [36:07] - Future of Drone Business and Industry Changes [39:21] - Navigating Uncertainties in the Drone Industry [42:05] - Adapting to Market Changes and Client Needs [44:50] - Exploring New Opportunities and Innovations [46:26] - Reality Checks for Drone Business Owners [51:09] - Resilience and Perseverance in Challenging Times [54:50] - Networking and Collaboration for Growth [01:00:49] - Research and Development: NSF Grant Insights [01:06:08]  - Future Aspirations and Scaling the Business [01:08:55] - Lessons Learned and Best Practices

Lenny's Podcast: Product | Growth | Career
Marc Andreessen: The real AI boom hasn't even started yet

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Jan 29, 2026 104:35


Marc Andreessen is a founder, investor, and co-founder of Netscape, as well as co-founder of the venture capital firm Andreessen Horowitz (a16z). In this conversation, we dig into why we're living through a unique and one of the most incredible times in history, and what comes next.We discuss:1. Why AI is arriving at the perfect moment to counter demographic collapse and declining productivity2. How Marc has raised his 10-year-old kid to thrive in an AI-driven world3. What's actually going to happen with AI and jobs (spoiler: he thinks the panic is “totally off base”)4. The “Mexican standoff” that's happening between product managers, designers, and engineers5. Why you should still learn to code (even with AI)6. How to develop an “E-shaped” career that combines multiple skills, with AI as a force multiplier7. The career advice he keeps coming back to (“Don't be fungible”)8. How AI can democratize one-on-one tutoring, potentially transforming education9. His media diet: X and old books, nothing in between—Brought to you by:DX—The developer intelligence platform designed by leading researchersBrex—The banking solution for startupsDatadog—Now home to Eppo, the leading experimentation and feature flagging platform—Episode transcript: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom—Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0—Where to find Marc Andreessen:• X: https://x.com/pmarca• Substack: https://pmarca.substack.com• Andreessen Horowitz's website: https://a16z.com• Andreessen Horowitz's YouTube channel: https://www.youtube.com/@a16z—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Marc Andreessen(04:27) The historic moment we're living in(06:52) The impact of AI on 
society(11:14) AI's role in education and parenting(22:15) The future of jobs in an AI-driven world(30:15) Marc's past predictions(35:35) The Mexican standoff of tech roles(39:28) Adapting to changing job tasks(42:15) The shift to scripting languages(44:50) The importance of understanding code(51:37) The value of design in the AI era(53:30) The T-shaped skill strategy(01:02:05) AI's impact on founders and companies(01:05:58) The concept of one-person billion-dollar companies(01:08:33) Debating AI moats and market dynamics(01:14:39) The rapid evolution of AI models(01:18:05) Indeterminate optimism in venture capital(01:22:17) The concept of AGI and its implications(01:30:00) Marc's media diet(01:36:18) Favorite movies and AI voice technology(01:39:24) Marc's product diet(01:43:16) Closing thoughts and recommendations—Referenced:• Linus Torvalds on LinkedIn: https://www.linkedin.com/in/linustorvalds• The philosopher's stone: https://en.wikipedia.org/wiki/Philosopher%27s_stone• Alexander the Great: https://en.wikipedia.org/wiki/Alexander_the_Great• Aristotle: https://en.wikipedia.org/wiki/Aristotle• Bloom's 2 sigma problem: https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem• Alpha School: https://alpha.school• In Tech We Trust? 
A Debate with Peter Thiel and Marc Andreessen: https://a16z.com/in-tech-we-trust-a-debate-with-peter-thiel-and-marc-andreessen• John Woo: https://en.wikipedia.org/wiki/John_Woo• Assembly: https://en.wikipedia.org/wiki/Assembly_language• C programming language: https://en.wikipedia.org/wiki/C_(programming_language)• Python: https://www.python.org• Netscape: https://en.wikipedia.org/wiki/Netscape• Perl: https://www.perl.org• Scott Adams: https://en.wikipedia.org/wiki/Scott_Adams• Larry Summers's website: https://larrysummers.com• Nano Banana: https://gemini.google/overview/image-generation• Bitcoin: https://bitcoin.org• Ethereum: https://ethereum.org• Satoshi Nakamoto: https://en.wikipedia.org/wiki/Satoshi_Nakamoto• Inside ChatGPT: The fastest-growing product in history | Nick Turley (Head of ChatGPT at OpenAI): https://www.lennysnewsletter.com/p/inside-chatgpt-nick-turley• Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann• Inside Google's AI turnaround: The rise of AI Mode, strategy behind AI Overviews, and their vision for AI-powered search | Robby Stein (VP of Product, Google Search): https://www.lennysnewsletter.com/p/how-google-built-ai-mode-in-under-a-year• DeepSeek: https://www.deepseek.com• Cowork: https://support.claude.com/en/articles/13345190-getting-started-with-cowork• Definite vs. 
indefinite thinking: Notes from Zero to One by Peter Thiel: https://boxkitemachine.net/posts/zero-to-one-peter-thiel-definite-vs-indefinite-thinking• Henry Ford: https://www.thehenryford.org/explore/stories-of-innovation/visionaries/henry-ford• Lex Fridman Podcast: https://lexfridman.com/podcast• $46B of hard truths from Ben Horowitz: Why founders fail and why you need to run toward fear (a16z co-founder): https://www.lennysnewsletter.com/p/46b-of-hard-truths-from-ben-horowitz• Eddington: https://www.imdb.com/title/tt31176520• Joaquin Phoenix: https://en.wikipedia.org/wiki/Joaquin_Phoenix• Pedro Pascal: https://en.wikipedia.org/wiki/Pedro_Pascal• George Floyd: https://en.wikipedia.org/wiki/George_Floyd• Replit: https://replit.com• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad• Grok Bad Rudi: https://grok.com/badrudi• Wispr Flow: https://wisprflow.ai• Star Trek: The Next Generation: https://www.imdb.com/title/tt0092455• Star Trek: Starfleet Academy: https://www.imdb.com/title/tt8622160• a16z: The Power Brokers: https://www.notboring.co/p/a16z-the-power-brokers—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Ones Ready
Ep 554: AFSW Attribute - Trainability

Ones Ready

Play Episode Listen Later Jan 28, 2026 34:27


This episode tackles one of the most decisive attributes in Air Force Special Warfare selection: trainability. Aaron, Trent, and Peaches break down why prior experience, certifications, and ego mean nothing if you can't take feedback and apply it immediately. Trainability isn't about showing up perfect—it's about learning fast, adapting under pressure, and improving visibly rep to rep. From instructor mind games and deliberate task changes to debrief culture, medical evolution, radios, and real pipeline examples, this episode explains exactly how cadre spot coachable candidates—and why untrainable ones flame out. If you think "I already know" is a strength, this episode is your warning.

⏱️ Timestamps:
00:00 Ones Ready intro and why trainability matters
02:10 What trainability actually means in selection
04:50 Ego, certifications, and false confidence
07:20 Instructor feedback tests explained
10:30 Debriefs and visible improvement
13:40 Trainability in medicine, CAS, and radios
17:00 Adapting to new tasks fast
20:30 No-go behaviors instructors spot immediately
23:50 Trainability over an entire career
27:30 White-belt mindset and humility
31:00 Final charge: value the process, not your ego