Podcast appearances and mentions of Michael Jordan

American basketball player and businessman

  • 13,252 PODCASTS
  • 28,188 EPISODES
  • 52m AVG DURATION
  • 5 DAILY NEW EPISODES
  • LATEST: Feb 16, 2026
Popularity of Michael Jordan mentions, 2019–2026 (chart)


    Latest podcast episodes about Michael Jordan

    Pardon My Take
    Jameis Winston, The Boys Are Back In Studio, Kevin Durant's Alleged Burners, NBA All Star Game, Olympics And Vacation Recap

    Feb 16, 2026 · 146:16


    The boys are back in studio from vacation and an NBA All Star Weekend that got dominated by another alleged Kevin Durant burner hitting the internet. We talk dunk contest, Chris Paul retiring and more (00:00:00-00:31:25). Olympic talk with a curling controversy and sports we don't understand plus some CBB talk (00:31:25-01:03:49). Who's back of the week including Anthony Kim, Michael Jordan winning the Daytona 500 (01:03:49-01:23:34). Jameis Winston joins the show to talk about his year with the Giants, the best TD of the season, the Rizzler, how much longer he wants to play and what's next for him (01:23:34-02:02:43). We finish with a little recap from everyone's vacation (02:02:43-02:24:05). You can find every episode of this show on Apple Podcasts, Spotify or Netflix. Prime Members can listen ad-free on Amazon Music. For more, visit barstool.link/pardon-my-take

    The Learning Leader Show With Ryan Hawk
    675: Tom Hardin (Tipper X) - The Largest Insider Trading Case, How Ambiguous Leadership Destroys Culture, Resume vs. Eulogy Virtues, Bad Decisions vs. Mistakes, and Building Psychological Safety

    Feb 16, 2026 · 54:50


    The Learning Leader Show with Ryan Hawk. Go to www.LearningLeader.com

    This is brought to you by Insight Global. If you need to hire one person, hire a team of people, or transform your business through Talent or Technical Services, Insight Global's team of 30,000 people around the world has the hustle and grit to deliver. www.InsightGlobal.com/LearningLeader

    My guest: Tom Hardin was known as "Tipper X" during Operation Perfect Hedge, the largest insider trading investigation in history. After making four illegal trades based on inside information, the FBI approached him on a Manhattan street corner and convinced him to wear a wire over 40 times, helping build 20 of the 81 cases.

    Key Learnings
    * Ambiguity is where ethical lines blur. Tom's boss said, "Do whatever it takes," after the hedge fund lost money, and as a junior employee, Tom didn't ask clarifying questions.
    * The undiscussable becomes undiscussable. Leaders give ambiguous messages, then pretend they weren't ambiguous, employees get confused and don't question the boss, and you end up with a culture of silence.
    * Making decisions in isolation is dangerous. The information came to Tom and he didn't talk to his boss or his wife (who probably would've slapped him around for crossing ethical lines).
    * Psychological safety requires muscle memory. You have to practice saying "I'm just going to ask some clarifying questions here" when your boss gives ambiguous orders.
    * Bad decisions aren't mistakes. Mistakes are made without intent, but bad decisions are made with intent. Tom told himself for years he made "mistakes," but on a drive home from speaking at a keynote, he realized: "There's no way I made mistakes. I made bad decisions."
    * Never say never. Tom argues you're more susceptible to falling down your own slippery slope when you think "that would never be me."
    * 80% of employees can be swayed either way. 10% are morally incorruptible, 10% are a compliance nightmare, and 80% can be influenced by the culture around them.
    * Tone at the top means nothing. Company culture isn't the tone at the top or glossy shareholder letters; it's the behaviors employees believe will be rewarded or put them ahead.
    * Reward character, not just results. You can't just focus on short-term performance and dollar goals without understanding how the business was made and what was behind the performance.
    * The question isn't "what?" but "how?" If you're just focused on the numbers and not on how you got there, you have the opportunity to end up in a slippery slope situation.
    * Celebrate people who live your values. Companies that spend millions on trips for people who live out shared values (not financial performance) are putting their money where their mouth is.
    * Leaders must share their own ethical dilemmas. We've all been in situations where we could go left or right, and sharing how you worked through those moments makes you more endearing and a better leader.
    * Keep a rationalization journal. When Tom and his wife have big decisions (or even little things), he writes them down in a rationalization journal and reflects on them once a month. He's still susceptible to going down another slippery slope, so checking himself on those passing thoughts improves his character over time.
    * It's not what you say, it's what you do. Just like kids see what parents do (not what they say), employees see what behaviors leaders actually reward.
    * $46,000 cost him $23 million. A business school professor calculated Tom would've made $23 million if he'd stayed on the hedge fund path, but he made $46,000 on the four illegal trades before getting caught.
    * His wife was his rock. 85% of marriages end when something like this happens, and she had every right to leave. They just got married, no kids yet. But she stayed. When Tom interviewed her for the book 20 years later, she said, "All I remember is you accepted responsibility immediately. You didn't make up excuses."
    * Running pulled him out of a shame spiral. Tom got obese as a stay-at-home dad. His wife signed him up for a 5K race (and beat him while pushing a jogging stroller). Just crossing that finish line lit a fire. He ended up running a 100-mile race.
    * Doing hard things teaches you that you can do hard things. When Tom had to start a speaking business because they were running out of money, he said, "I can do this" because he'd already put his body through ultramarathons. No challenge is insurmountable.
    * He ended up with something better. It's not about status or money anymore; it's about who he is with his family and his relationships now.
    * Windshield mentality, not rearview mirror. Tom can't change the past, but he can look forward instead of backward. A lot of people in their twenties do stupid stuff (maybe not to this degree), but now, in his forties, he can learn from it. Why not embrace it rather than try to scrub it off the internet?
    * Eulogy virtues versus resume virtues. In his twenties, Tom only thought about resume virtues (how much money, the next job, the next stepping stone) and never about eulogy virtues (what people will say about his character when it's all over).
    * What will people say at your eulogy? Will they still be talking about those four trades, or will they talk about who you became after?

    More Learning
    * #226 - Steve Wojciechowski: How to Win Every Day
    * #281 - George Raveling: Wisdom from MLK Jr to Michael Jordan
    * #637 - Tom Ryan: Chosen Suffering: Become Elite in Life & Leadership

    Reflection Questions
    * Tom's boss gave him an ambiguous message ("do whatever it takes"), and as a junior employee, he didn't ask clarifying questions. Think about the last ambiguous instruction you received from leadership. Did you ask clarifying questions, or did you fill in the blanks yourself? What's stopping you from creating psychological safety to ask next time?
    * Tom argues that 80% of employees can be swayed either way by culture. Look at your organization right now. What behaviors are actually being rewarded? If someone asked your team "what gets you ahead here?" what would they honestly say?
    * Tom asks: "Will people be talking about the resume virtues (money, titles, achievements) or the eulogy virtues (character, relationships, who you were) when you're gone?" What's one eulogy virtue you need to start prioritizing today, even if it means slowing down on resume building?

    The Teardown
    Rarefied Airness

    Feb 16, 2026 · 61:53


    The Great American Race is officially in the books, and Tyler Reddick broke through to capture the biggest win in his career. Our resident reporters Jeff Gluck and Jordan Bianchi were on the scene in Daytona, and they checked into the Teardown to discuss what was a relatively controversy-free race, what to do about fuel saving and how the points are looking after race one. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    SpeedFreaks: A National Radio Show
    Michael Jordan Becomes Daytona 500 Champion, Full Freaks Recap

    Feb 16, 2026 · 120:00


    It's full FREAK ahead, as the 2026 motorsports season officially kicked off with what will be one of the most talked-about Daytona 500s in NASCAR history. What's next for America's auto racing titan, with NBA G.O.A.T. Michael Jordan adding "Daytona 500 Champion" to his outstanding resume thanks to elite driving by Tyler Reddick? Did "The Great American Race" live up to its billing, or did fuel saving put a damper on NASCAR's Super Bowl? Daytona winners Austin Hill, Denny Hamlin and Reddick join the show, and SpeedFreaks gets the exclusive scoop on whether it was Alex Bowman caught on camera busting his behind in a snowy fall this offseason.

    Michael Phelps - Audio Biography
    Michael Phelps' Legacy Still Echoes as 2026 Olympics Takes Center Stage

    Feb 14, 2026 · 1:59


    Michael Phelps BioSnap, a weekly updated biography. Michael Phelps, the swimming legend with 28 Olympic medals, has stayed largely out of the spotlight in the past few days amid the buzz of the 2026 Winter Olympics in Milan, but a few ripples from his enduring fame surfaced. Sports Business Journal reports that former NASCAR Commissioner Steve Phelps stepped down following a messy antitrust trial settlement with teams like Michael Jordan's 23XI Racing, marking a pivotal shift for the sport as it kicks off its drama-free 2026 season at Daytona this weekend—though our Phelps was never directly involved, the name overlap sparked brief social chatter. Over at USA Hockey's Olympic insider updates from Milano Cortina, one player fondly recalled Phelps' epic 2008 Beijing dominance as their favorite childhood Olympic moment, a nostalgic nod amid Team USA's hockey prep against Latvia. Meanwhile, Swimming World Magazine dished a Throwback Thursday feature on February 5 spotlighting Phelps' unforgettable 41 days mastering the 200 individual medley, reigniting fan nostalgia just before the Games hype peaked. No fresh public appearances, business deals, or social media posts from Phelps himself popped up—verified sources like AOL's coverage of his fiery January Instagram critique of USA Swimming leadership, blasting their post-Paris Olympics medal slump and vowing to help fix it, remain the hottest recent drama without updates. Gossip mills stayed quiet on family scoops or golf swings, with his Ping endorsement and poker flings feeling like ancient history. Phelps seems content cheering from afar, his legacy still fueling headlines without stealing the Olympic thunder. Get the best deals https://amzn.to/3ODvOta This content was created in partnership and with the help of Artificial Intelligence (AI)

    The Rich Eisen Show
    Hour 2: Hall of Famer Chris Webber Talks NBA, plus LeBron's Lakers Future

    Feb 12, 2026 · 46:23


    Basketball Hall of Famer Chris Webber tells Rich his favorite moments from his first NBA All-Star Game in 1997, including meeting Celtics legend Bill Walton, why he never shied away from talking trash with Michael Jordan, reveals which players in today's game he most loves watching, and says how coaching has evolved to take more advantage of the growing skillsets of big men like Nikola Jokic and Giannis Antetokounmpo. Rich weighs in on LeBron James' uncertain future with the Los Angeles Lakers beyond this season, and gets into a heated debate with Brockman: is Staten Island more a part of New York City than Maine is a part of New England? Rich and Brockman look back at Patriots QB Drake Maye's season and what the MVP runner-up can learn from his Super Bowl LX loss. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Sports Media with Richard Deitsch
    First Look: A new doc shows the Julius Erving you never saw — when he was like Michael Jordan every night

    Feb 12, 2026 · 7:34


    Here's a First Look from our upcoming podcast with Kenan Kamwana Holley, an Emmy Award-winning documentarian who directed "Soul Power: The Legend of the American Basketball Association," a four-part docu-series that will debut on Feb. 12 on Prime Video. The full podcast will be out later this week. In this preview clip, Holley discusses his interviews with Julius Erving and discovering ABA footage showing Dr. J at the height of his prime. You can subscribe to this podcast on Apple Podcasts, Spotify and more. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.

    Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

    We discuss:
    * Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
    * The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
    * Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
    * Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
    * Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
    * Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
    * TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
    * Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
    * Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
    * Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
    * Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
    * Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
    * Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

    Show Notes:
    * Gemma 3 Paper
    * Gemma 3
    * Gemini 2.5 Report
    * Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
    * Latency Numbers Every Programmer Should Know by Jeff Dean
    * The Jeff Dean Facts
    * Jeff Dean Google Bio
    * Jeff Dean on "Important AI Trends" @Stanford AI Club
    * Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

    Jeff Dean
    * LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
    * X: https://x.com/jeffdean

    Google
    * https://google.com
    * https://deepmind.google

    Full Video Episode

    Timestamps
    * 00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
    * 00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
    * 00:01:31 — Frontier models vs Flash models + role of distillation
    * 00:03:52 — History of distillation and its original motivation
    * 00:05:09 — Distillation's role in modern model scaling
    * 00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
    * 00:07:46 — Flash model economics & wide deployment
    * 00:08:10 — Latency importance for complex tasks
    * 00:09:19 — Saturation of some tasks and future frontier tasks
    * 00:11:26 — On benchmarks, public vs internal
    * 00:12:53 — Example long-context benchmarks & limitations
    * 00:15:01 — Long-context goals: attending to trillions of tokens
    * 00:16:26 — Realistic use cases beyond pure language
    * 00:18:04 — Multimodal reasoning and non-text modalities
    * 00:19:05 — Importance of vision & motion modalities
    * 00:20:11 — Video understanding example (extracting structured info)
    * 00:20:47 — Search ranking analogy for LLM retrieval
    * 00:23:08 — LLM representations vs keyword search
    * 00:24:06 — Early Google search evolution & in-memory index
    * 00:26:47 — Design principles for scalable systems
    * 00:28:55 — Real-time index updates & recrawl strategies
    * 00:30:06 — Classic "Latency numbers every programmer should know"
    * 00:32:09 — Cost of memory vs compute and energy emphasis
    * 00:34:33 — TPUs & hardware trade-offs for serving models
    * 00:35:57 — TPU design decisions & co-design with ML
    * 00:38:06 — Adapting model architecture to hardware
    * 00:39:50 — Alternatives: energy-based models, speculative decoding
    * 00:42:21 — Open research directions: complex workflows, RL
    * 00:44:56 — Non-verifiable RL domains & model evaluation
    * 00:46:13 — Transition away from symbolic systems toward unified LLMs
    * 00:47:59 — Unified models vs specialized ones
    * 00:50:38 — Knowledge vs reasoning & retrieval + reasoning
    * 00:52:24 — Vertical model specialization & modules
    * 00:55:21 — Token count considerations for vertical domains
    * 00:56:09 — Low resource languages & contextual learning
    * 00:59:22 — Origins: Dean's early neural network work
    * 01:10:07 — AI for coding & human–model interaction styles
    * 01:15:52 — Importance of crisp specification for coding agents
    * 01:19:23 — Prediction: personalized models & state retrieval
    * 01:22:36 — Token-per-second targets (10k+) and reasoning throughput
    * 01:23:20 — Episode conclusion and thanks

    Transcript

    Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this slittily advanced.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make UNOS able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that. One or the other is useful. They're both useful. So I think we'd like to do both.
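
    A quick aside before the conversation turns to distillation in the next few exchanges: the technique dates back to the 2015 "Distilling the Knowledge in a Neural Network" paper by Hinton, Vinyals, and Dean, and the core idea Jeff describes, training a small model against the big model's softened logits rather than only the hard labels, looks roughly like the sketch below. This is an illustrative toy in plain NumPy; the temperature, the 0.5 mixing weight, and the tiny shapes are assumptions made for the example, not details from the episode or from Gemini's actual setup.

```python
# Minimal sketch of logit-based distillation: a large "teacher" model's softened
# probabilities supervise a small "student" alongside the usual hard labels.
# All constants and sizes here are illustrative assumptions.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Mix the soft-label KL term (teacher -> student) with hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, temperature)              # softened teacher targets
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    # KL(teacher || student), scaled by T^2 as in the original paper
    soft_loss = np.mean(np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - log_p_student), axis=-1)) * temperature ** 2
    # Ordinary cross-entropy against the one-hot labels (temperature 1)
    log_p_hard = np.log(softmax(student_logits) + 1e-12)
    hard_loss = -np.mean(log_p_hard[np.arange(len(hard_labels)), hard_labels])
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy batch: 4 examples over a 10-class "vocabulary"
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(4, 10)) * 3.0  # a confident, larger model
student_logits = rng.normal(size=(4, 10))        # a smaller model still learning
labels = np.array([1, 3, 3, 7])
print(distillation_loss(student_logits, teacher_logits, labels))
```

    The teacher's full distribution carries information about which wrong answers are almost right, which is why, as Jeff notes below, a student can keep extracting value from many passes over the same data once the larger model's logits are the training signal.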
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Jeffrey came up with the solution in 2014.Jeff Dean [00:03:28]: Don't forget, L'Oreal Vinyls as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. 
Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. 
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. 
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? 
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to this reminds me of one of your classics, which I have to bring up, which is latency numbers. Every programmer should know, uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache mistake? How long does branch mispredict take? How long does a reference domain memory take? How long does it take to send, you know, a packet from the U S to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumb nailing or something of the result page, you know, how, what I do that I could pre-compute the image thumbnails. I could like. Try to thumbnail them on the fly from the larger images. What would that do? How much dis bandwidth than I need? How many des seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM from the accelerator. Attached memory or DRAM or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picodule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the off chip, but on the other side of the same chip can be, you know, a thousand picodules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picodules. So you better make use of that, that thing that you moved many, many times with. 
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picodules in order to do your one picodule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously NVIDIA has caused a lot of waves with betting very hard on SRAM with Grok. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost. Uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but as if you do that and it all fits in. In SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it? Is it worth doing an hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML and in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out where, what ML computations will people want to run two to six years out in a very fast changing field. And so having people with interest. 
Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? Obviously Groq has caused a lot of waves by betting very hard on SRAM. I wonder if that's something you already saw with the TPUs, because to serve at your scale you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. So you're now striping your smallish-scale model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC, which is the most extreme version. How much of it is worth doing in hardware when things change so quickly? What is the internal discussion?

Jeff Dean [00:35:57]: Yeah. I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because as a hardware designer, for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center, and then it has to have a reasonable lifetime as a chip, which takes you out three, four, five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field. And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N plus two, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Because sometimes you can squeeze some changes into N plus one, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if they work out, they make something ten times as fast; and if they don't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us that this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse of that, where you've already committed to this chip design, so you cannot take the model architecture in a certain direction because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So I think it goes both ways. Sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary is...

Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount, right? It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. And I think people have gotten a lot of mileage out of having very low bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.

Shawn Wang [00:39:15]: Scaling... interesting. So low precision, but scaled-up weights. Huh. Never considered that. Interesting.
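A toy version of that idea, low-bit integer weights with one shared scale per block, looks roughly like the sketch below. The 4-bit width and block size of 32 are illustrative choices, not a description of any actual TPU or Gemini weight format:

```python
import numpy as np

# Toy sketch of "very low bit precision plus scaling factors that apply to a
# whole bunch of weights": 4-bit-style integers with one float scale per block
# of 32 values. Bit width and block size are illustrative assumptions.

def quantize_blockwise(weights: np.ndarray, block: int = 32, bits: int = 4):
    w = weights.reshape(-1, block)
    qmax = 2 ** (bits - 1) - 1                      # 7 for signed 4-bit
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)     # avoid divide-by-zero blocks
    q = np.clip(np.round(w / scales), -qmax, qmax).astype(np.int8)
    return q, scales

def dequantize_blockwise(q: np.ndarray, scales: np.ndarray, shape) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(shape)

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_blockwise(w)
err = np.abs(w - dequantize_blockwise(q, s, w.shape)).mean()
print(f"mean abs reconstruction error: {err:.4f}")
```

The per-block scale is what lets very few bits cover weights of very different magnitudes, which is the "scaling factors over a bunch of weights" trick in miniature.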
Shawn Wang: While we're on this topic: the whole concept of precision is a little weird when we're sampling. At the end of all this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, what's your commentary?

Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends there. Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get sort of an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor, where you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a five-X improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. So these are all really good techniques, and I think it's really good to look at them through the lens of energy, real energy, not energy-based models, and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models, more cheaply and with lower latency.
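The speculative-decoding arithmetic in that answer can be written down directly. This sketch uses the illustrative numbers from the conversation (a draft of eight tokens, roughly five accepted), not measured acceptance rates:

```python
# Sketch of the speculative-decoding amortization described above: a small
# draft model proposes several tokens, the big model verifies them in one
# pass, and the accepted tokens amortize the cost of moving the big model's
# weights. The numbers are the illustrative ones from the conversation.

def tokens_per_weight_move(draft_len: int = 8, expected_accepted: float = 5.0) -> float:
    """Roughly how many output tokens one weight-moving verification pass yields."""
    return min(expected_accepted, float(draft_len))

def relative_weight_movement(draft_len: int = 8, expected_accepted: float = 5.0) -> float:
    """Weight movement per token, relative to plain one-token-at-a-time decoding."""
    return 1.0 / tokens_per_weight_move(draft_len, expected_accepted)

print(f"tokens per weight move: {tokens_per_weight_move():.1f}")
print(f"relative weight movement per token: {relative_weight_movement():.2f}x")
```

In other words, accepting about five of eight drafted tokens means the large model's weights are moved roughly a fifth as often per generated token, the "five-X improvement in the amortization" mentioned above.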
Shawn Wang [00:41:03]: Yeah. Well, it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.

Jeff Dean [00:41:23]: Yeah. I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be very low power, but you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas that you've seen, or maybe things you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. But in terms of research directions, there are a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, say, one model that's using other models as tools, in order to accomplish much more significant pieces of work collectively than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements that you're seeing in both math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with deep research. And you kind of have it with AI Mode; in a way it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are, like, information retrieval, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models evaluate the results of what a first model did, maybe even the retrieval. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system. Yeah.
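In code, that critic pattern is just a second scoring pass over the retrieved candidates. This is a hedged sketch: `generate` stands in for whatever LLM call is available (it is not a real library API), and the 0-to-10 rubric and the cut of 50 are arbitrary choices taken from the example above:

```python
from typing import Callable, List, Tuple

# Sketch of "the same model, prompted differently, acting as a critic": a
# first pass retrieves candidates, a second pass scores each one for
# relevance and keeps the best. `generate` is a hypothetical stand-in for an
# LLM call, not a real API.

def rerank_with_critic(
    query: str,
    candidates: List[str],
    generate: Callable[[str], str],
    keep: int = 50,
) -> List[str]:
    scored: List[Tuple[float, str]] = []
    for doc in candidates:
        prompt = (
            "Rate from 0 to 10 how relevant this document is to the query.\n"
            f"Query: {query}\nDocument: {doc}\n"
            "Reply with a single number."
        )
        try:
            score = float(generate(prompt).strip())
        except ValueError:
            score = 0.0            # unparseable critique counts as irrelevant
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:keep]]
```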
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly with this RLVR thing, everyone's talking about, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge?

Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Because I think everyone sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and I think that's why it's super interesting. If you think about two years ago, we were struggling with GSM-8K problems, right? Like: Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, with IMO and Erdős problems in pure language. That is a really, really amazing jump in capabilities in a year and a half or something. And for other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.

Shawn Wang [00:46:13]: Yeah.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.

Shawn Wang [00:46:20]: That would be, as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out. Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads, right? We have some distributed representation that is neural-net-like in some way, lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan, do chains of thought, and roll them back: okay, that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. In a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete, symbolic things and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago.

Jeff Dean [00:48:06]: I mean, I do think that going from the IMO effort that translated into Lean and used Lean, along with a specialized geometry model, to this year switching to a single unified model, which is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model. Or I want to do speech recognition, so I have a speech model. I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know how they work, I don't know where the IMO competition was held, I don't know the rules of it, I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute and they can kind of tackle any task. Which is the bitter lesson, I guess. I don't know.

Jeff Dean [00:49:39]: Yeah. I mean, I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is like one to ten trillion parameters, we don't know. But the Gemma models, for example: a lot of people want the open-source local models, and those carry some knowledge that is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and going down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than that one obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email. Probably we'd rather have a single model that we can then use, with the ability to retrieve from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that, with multiple stages of interaction. That makes sense.
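That multi-stage retrieve-then-reason loop is easy to sketch. Everything named here is a hypothetical stand-in: `llm` for a chat model, `search_email` for a personal index exposed as a tool, and the SEARCH/ANSWER convention is just an illustrative protocol, not how Gemini's tool use actually works:

```python
from typing import Callable, List

# Sketch of "multiple stages of retrieval with reasoning over the intermediate
# results", in the spirit of the personal-Gemini example above. `llm` and
# `search_email` are hypothetical stand-ins, not real APIs.

def retrieve_and_reason(
    question: str,
    llm: Callable[[str], str],
    search_email: Callable[[str], List[str]],
    max_rounds: int = 3,
) -> str:
    notes: List[str] = []
    for _ in range(max_rounds):
        step = llm(
            f"Question: {question}\n"
            f"Notes so far: {notes}\n"
            "Reply 'SEARCH: <query>' to look something up, or 'ANSWER: <answer>' when done."
        )
        if step.startswith("ANSWER:"):
            return step[len("ANSWER:"):].strip()
        if step.startswith("SEARCH:"):
            query = step[len("SEARCH:"):].strip()
            notes.extend(search_email(query))   # intermediate results feed the next round
        else:
            notes.append(step)                  # keep any free-form reasoning as a note
    return llm(f"Question: {question}\nNotes: {notes}\nGive your best answer now.")
```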
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Or take robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, while it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all able to be knitted together to work in concert and called upon in different circumstances. Like, if I have a health-related thing, it should enable using that health module in conjunction with the main base model to be even better at those kinds of things. Yeah.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare, and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? And if I need a trillion healthcare tokens, they're probably not out there. I think that's really the...

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we don't have access to, appropriately, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably better than a general model trained on, say, public data. Yeah.

Shawn Wang [00:55:58]: Yeah. I believe, by the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns. Yeah.
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. You can put your whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of Somali text in the world. We're probably not putting all the data from those languages into the Gemini base training; we put some of it, but if you put more of it in, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.

    The Marc Cox Morning Show
    Kim on a Whim — The Jock Tax Trap and St. Louis's Self-Inflicted Wounds

    The Marc Cox Morning Show

    Play Episode Listen Later Feb 12, 2026 10:33


    Kim dives into the absurdity of California's “jock tax,” revealing how Sam Darnold actually lost money playing in the Super Bowl after being taxed for simply spending a week in the state. The team unpacks how 21 states now squeeze visiting athletes, with California leading the pack and Illinois famously responding with “Michael Jordan's Revenge.” The discussion shifts to St. Louis's own 1% earnings tax and the city's decades-long decline under one-party rule. Marc calls out leadership for driving away businesses with high taxes and unchecked crime, arguing that until the city changes course, companies and workers will keep fleeing to the suburbs. Hashtags: #KimOnAWhim #MarcCoxShow #JockTax #CaliforniaTaxes #StLouis #EarningsTax #BusinessFlight #TaxPolicy #DowntownDecline

    The Hoop Genius Podcast
    S6 Ep35: Fighting, Tanking & The NBA's All-Star Problem

    The Hoop Genius Podcast

    Play Episode Listen Later Feb 12, 2026 47:11


This episode covers everything shaping the NBA right now: All-Star controversy, tanking, league expansion, buyout chaos, and the culture divide between old-school intensity and the modern product. Mo Mooncey and Coach Brendan Suhr open with an all-time All-Star draft featuring legends like Michael Jordan, Stephen Curry, LeBron James, Kobe Bryant and Shaquille O'Neal - then use it as a launch point to examine a bigger question: has the NBA All-Star Game lost its edge? From declining competitiveness to load management, influencer marketing, and Gen Z viewing habits, the discussion digs into why All-Star Weekend feels different - and what could realistically fix it.
The episode also breaks down:
- The Pistons vs Hornets fight and what it says about NBA culture
- Why tanking is damaging competitive integrity
- Early-season load management and its impact on fans
- The buyout market and the career risk for young players
- Expansion to Seattle and Las Vegas
- Whether World vs USA is the only format that can restore All-Star intensity
- Why the modern NBA product is struggling to hold younger audiences
Coach Suhr brings perspective from nearly five decades inside the league, including coaching All-Star Games and competing against the 90s-era legends. The contrast between past and present is clear - and uncomfortable. If you care about the future of the NBA, this is the conversation.

    Awful Announcing Podcast
    Alex Weaver on state of NASCAR, Daytona 500, Michael Jordan, and more

    Awful Announcing Podcast

    Play Episode Listen Later Feb 12, 2026 41:46


Host Brandon Contes interviews Fox Sports NASCAR reporter Alex Weaver. Brandon and Alex discuss a wide range of topics including this year's Daytona 500 storylines, the aftermath of the 23XI Racing/Front Row Motorsports lawsuit, the return of The Chase championship format, and more.
- 2:13: Daytona 500
- 11:49: Getting into NASCAR/sports media career
- 18:31: State of NASCAR
- 24:54: Cook Out Clash at Bowman Gray
- 27:39: Michael Jordan and 23XI Racing post-lawsuit
- 31:58: Chase format coming back
- 34:14: Biffle/Hamlin family tragedies
- 37:38: Interviewing drivers
Download the Awful Announcing Podcast: Listen on Apple | Listen on Spotify. Awful Announcing on X, Facebook, Instagram, Threads, BlueSky, LinkedIn, and YouTube. Hosted on Acast. See acast.com/privacy for more information.

    I'm a Podstar not a Doctor
    Hustle Real Hard

    I'm a Podstar not a Doctor

    Play Episode Listen Later Feb 11, 2026 46:58


    This week we talk to Earl Cooper of East Side Golf about his journey to success. The takeaways include the themes of perseverance and manifesting dreams. Earl Cooper discusses the expansion of his brand, Eastside Golf, and his love for the golf lifestyle. He shares his journey from Atlanta to New York and LA, his experiences playing at exclusive golf clubs, and his pinch-me moments meeting Michael Jordan and achieving significant milestones. Earl also emphasizes the importance of authenticity and building the brand, as well as the expansion of the golf market and hip-hop collaborations. Throughout the conversation, he reflects on success and humility, highlighting the impact of family and legacy.

    Fuera Del Control
    Ep. 332.- ¿Se vienen buenos juegos este año?

    Fuera Del Control

    Play Episode Listen Later Feb 11, 2026 61:31


A little late, but now we finally sit down to chat about what happened in the Nintendo Direct, try some of the games that came out, reminisce about Michael Jackson and even Michael Jordan, and talk about bad sports games. Support this show http://supporter.acast.com/fuera-del-control. Hosted on Acast. See acast.com/privacy for more information.

    Bob Sirott
    This Week in Chicago History: Lou Malnati's, Michael Jordan, and the Chicago Auto Show

    Bob Sirott

    Play Episode Listen Later Feb 11, 2026


    Anna Davlantes, WGN Radio's investigative correspondent, joins Bob Sirott to share what happened this week in Chicago history. Stories include the St. Valentine’s Day Massacre, Michael Jordan’s MVP title, Lou Malnati’s heart-shaped pizza, and more.

    Dis Dat with My Cousin Vlad
    Episode 281: Wet'n'Wild WhatsApp

    Dis Dat with My Cousin Vlad

    Play Episode Listen Later Feb 11, 2026 69:54


Vlad reads out about persistence, gets his phone ransacked and audited by his Mrs while he is on a slide at Wet'n'Wild, talks to CK & AC live on air, rants on being completely distracted from your family by things out of your control, and counts down the top mobile phones of the 90s/00s.
DNA DISTILLERY (AWARD-WINNING RAKIJA): Award-winning rakija company with immaculate celebratory beverages. Check out the entire range on the website below, order a tasting pack or some of their flagship, amazing rakija today! https://www.dnadistillery.com
CARDSTRIKE! Amazing basketball cards, Michael Jordan memorabilia and everything collectable in sports card buying and selling! https://www.cardstrike.com.au
ROYAL STACKS (IMMACULATE BURGERS): Melbourne's greatest burgers! Royal Stacks is a booming burger chain in Victoria with classic burgers, shakes and more, with a 90s vibe and high-quality food! https://www.royalstacks.com.au
METROPOLITAN STONE (Kitchens, Cabinets, Laundry, All Cabinets): We have a combined 30 years' experience in the cabinet making industry in Victoria! Everything from small projects to large projects: benchtop change-overs, kitchen facilities, kitchens, laundries, bathroom cabinets, TV units, wardrobes etc! MENTION: VLAD. Contact: MATT 0425797488, Matthew@metropolitanstone.com.au, http://www.metropolitanstone.com.au
ORANGE LEGAL GROUP: Specialising in property law for purchasing and selling, conveyancing, with an in-house mortgage broker & chartered accountant! One-stop shop for ALL property needs! FREE contract reviews for buyers before purchasing property! Mention VLAD! https://www.orangelegalgroup.com.au Email: property@orangelegalgroup.com.au
Contact: mycousinvlad@gmail.com
http://www.instagram.com/mycousinvlad
Send Vlad a Text Message
Support the show
BE GOOD. DO GOOD. GET GOOD.

    Motivation Daily by Motiversity
    THE ART OF FAILING - The Most Powerful Motivational Speeches for Success, Athletes & Working Out

    Motivation Daily by Motiversity

    Play Episode Listen Later Feb 10, 2026 24:49


THE ART OF FAILING! The best in the world know they will fail again and again, but they have learned how to deal with it. Best Motivational Speeches from Motiversity, featuring speeches from Michael Jordan, Kobe Bryant, LeBron James, Tom Brady, Mike Tyson and more.
Special thanks to our partners:
Chris Williamson: https://www.youtube.com/@ChrisWillx
Patrick Bet-David of Valuetainment: https://www.youtube.com/@VALUETAINMENT
Lewis Howes: https://www.youtube.com/@lewishowes
Tom Bilyeu: https://www.youtube.com/@TomBilyeu
DOAC: https://www.youtube.com/@TheDiaryOfACEO
Speakers: Michael Jordan, Kobe Bryant, Kevin Hart, Chris Senegal, Dan Millman, Peter Diamandis, Tony Robbins, Da Rulk, Michael Jordan, Mike Tyson, Serena Williams, Roger Federer
Chris Williamson: https://www.youtube.com/@ChrisWillx
LeBron James: https://www.instagram.com/kingjames/
Walter Bond - YouTube: http://bit.ly/WalterBondMotivation
Eric Thomas: https://www.youtube.com/user/etthehiphoppreacher
Patrick Bet-David: https://www.youtube.com/@VALUETAINMENT
Tom Brady: https://www.instagram.com/tombrady/
Tim Grover, Morgan Housel
Chris Bumstead: https://www.instagram.com/cbum/?
Frank Bruno
Greg Plitt: https://www.instagram.com/gregplitt/?hl=en
Alex Hormozi: https://www.instagram.com/hormozi/?hl=en
Michael Jordan: https://www.instagram.com/jumpman23/
Ryan Holiday, Stephen Curry
Coach Pain - YouTube: http://bit.ly/2LmRyea | Instagram: http://bit.ly/2XLcLW5 | Facebook: http://bit.ly/32tZdNi
Marcus "Elevation" Taylor - YouTube: https://bit.ly/MarcusATaylorChannel
Conor McGregor, Tiger Woods, Muhammad Ali, Venus Williams, Floyd Mayweather, Babe Ruth, Greg Plitt
David Goggins - Facebook: https://www.facebook.com/iamdavidgoggins/ | Instagram: https://www.instagram.com/davidgoggins/ | Website: http://www.davidgoggins.com/
Rob Dial, Brandon Lake, Nicole Lynn (via Lewis Howes)
Joe Rogan: https://open.spotify.com/show/4rOoJ6Egrf8K2IrywzwOMk
Gorilla Nems
Jocko Willink (via Lewis Howes) - YouTube: http://bit.ly/2v5XxuK | Instagram: http://bit.ly/2M7oLdw | Facebook: http://bit.ly/2JVVaRx
Jordan Peterson: https://www.youtube.com/channel/UCL_f53ZEJxp8TtlOkHwMV9Q | https://www.jordanbpeterson.com/
William Hollis - YouTube: http://bit.ly/WillHollisYouTube | Instagram: https://www.instagram.com/williamkinghollis/ | Facebook: http://bit.ly/2LNZtgA | Website: https://williamhollismotivation.com/
Jeremiah Jones: https://www.instagram.com/jeremiahjonesfitness/
Cru Mahoney: https://www.instagram.com/crumahoney/
Music:
Secession Studios: https://www.youtube.com/user/thesecession
Rok Nardin - Heroes
Rok Nardin - Persistence: https://www.youtube.com/channel/UCs4fABLb5luHCojPUgg8AiA
Audiojungle, Epidemic Sound
Dreamscape: https://www.youtube.com/watch?v=LlN8MPS7KQs
Confidential Music @ConfidentialMusicofficial
Hosted on Acast. See acast.com/privacy for more information.

    Bedtime History: Inspirational Stories for Kids and Families

    The Chicago Bulls are a professional basketball team based in Chicago, Illinois. They became one of the most famous teams in the world during the 1990s, led by superstar player Michael Jordan. The Bulls won six championships in eight years, exciting fans with fast play, teamwork, and powerful defense. Their success helped make basketball more popular around the globe. The team's red and black colors and bull logo became symbols of excellence and determination.

    The Learning Leader Show With Ryan Hawk
    674: PJ Fleck - Building Elite Culture, Nekton Mindset, Selecting >Recruiting, Intrinsic Motivation, Row The Boat, and Transformational Coaching

    The Learning Leader Show With Ryan Hawk

    Play Episode Listen Later Feb 9, 2026 62:22


    Go to www.LearningLeader.com This is brought to you by Insight Global. If you need to hire one person, hire a team of people, or transform your business through Talent or Technical Services, Insight Global's team of 30,000 people around the world has the hustle and grit to deliver. www.InsightGlobal.com/LearningLeader My guest: PJ Fleck is the head football coach at the University of Minnesota. Before that, he transformed Western Michigan from one win to 13 wins and a Cotton Bowl appearance. Before his coaching days, PJ was a stud receiver at Northern Illinois and was a guy I played against in college. Coach Fleck has built one of college football's most distinctive culture-driven programs. You'll hear why he maintains an 80-20 split favoring high school recruiting over the transfer portal, how he runs practice with a 32-second clock to make it harder than games, and why he sees himself as a cultural driver rather than a motivational coach. This is a conversation recorded with all of our coaches inside "The Arena." That is our mastermind group for coaches in all sports. And it did not disappoint. Notes: Stop recruiting, start selecting. PJ doesn't chase the highest-rated players... He looks for fit and alignment with his values. Ask yourself: Are you trying to convince people to join your team, or are you selecting people who already want what you're building? Efficiency beats duration. PJ runs 95-minute practices with a 32-second play clock, always moving, always intense. The principle: Make practice harder than the game. Where in your work are you confusing time spent with intensity and focus? Internal drive trumps external motivation. PJ calls his ideal players "Nektons," always attacking, never satisfied. He's looking for people who prove their worth to themselves, not to others. If you need constant external motivation, you're not ready for elite teams. A leader must teach and demand. A team member must prepare and perform. These aren't opposing forces—they're two sides of the same commitment to excellence. My junior year at Ohio University. I was the quarterback of the Ohio football team. We lost to No. 17 Northern Illinois 30-23 in overtime on a Saturday night. P.J. Fleck caught the game-tying 15-yard touchdown pass late in the fourth quarter. PJ finished with 14 catches for 235 yards and a touchdown. (I threw a 30-yard TD pass to Anthony Hackett to put us up a TD right before halftime). Let your team see you played. They do"Guess that Gopher" before team meetings, where players guess which coach's highlights they're watching. Give them a peek behind the curtain. It builds credibility and connection. PJ honors his mentor, Jim Tressel, by wearing a tie while coaching. Who are you honoring through your daily practices? Keep your door open. PJ has no secretary. Players can walk into his office at any moment. Create fluidity between you and your team. Transparency after tragedy is a choice. When PJ's son died from a heart condition, he had two options: never talk about it again, or let it shape him. He chose radical transparency, knowing it would get scrutinized. That's where "Row the Boat" comes from. A losing season reveals what you actually need. After going 1-11 at Western Michigan while also getting divorced, PJ says every coach should experience a losing season. It forces you to identify what you actually need versus what you don't need. Choose what scares you. When deciding on Minnesota, Heather asked him, "Does this scare you?" He said, "Hell yeah, it scares me." 
His response: "Well then, that's where we're going." Life versus living. Living is the salary and contract. Life is about moments and memory. If you can't stay in the moment and reflect on great moments or hard moments, life will be like mashed potatoes to you. Your expectations should match your resources. The gap between expectations and resources is called frustration. The bigger the gap, the more frustration from everyone around you. Maintain an 80/20 model if you can. 80% high school players, 20% transfer portal. PJ has one of the highest retention rates in the country because of selection and fit, not recruiting. "It's not about the money until it's about the money." The kids' PJ gets value for other things before the money talk. They enjoy the experience of being a college athlete. PJ leads with "I'm really difficult to play for." PJ's opening line to recruits. He asks for a lot. This makes people who are lazy, complacent, or fraudulent run like hell. "This is going to expose me." Start with good people, not good players. Out of 500 kids, who are the best 25 young men? PJ doesn't get five stars. He gets two and three stars who believe they can be five stars. A chip versus a crack on your shoulder. Once you do something the media says you couldn't do, they'll set a new bar. All PJ wants is kids who want to prove to themselves that they can do what people say they couldn't. You don't need PJ's personality. You need the internal drive to be the best version of yourself. That's what he's selecting for. "I'm not a motivational coach. I'm a cultural driver." PJ picks their "how." He picks their journey. If someone needs constant motivation, they're not ready. Peel back the Instagram filter. Everything you see on social media is filtered. You have to dig deeper with this generation to find out who they really are. Hire former players back. PJ's staff has more former players who played for him than ever before. They cut their teeth in the building. In this transactional era, former players help you stay transformational. The HYPRR System. This is PJ's hyperculture framework he created after going 1-11: H (How): The people. Nektons who always attack. How you do one thing is how you do everything. Consistency matters. Y (Yours): Your vision. It's YOUR life, not anyone else's vision. Players are the builders. Don't tell me you want an extravagant home and then hire bad builders. P (Process): The work. The who, what, when, where, and why. Anyone should be able to ask those questions at any point. R (Result): Focus on the HYP. It's not the officials' fault. It's not the other team's fault. R (Response): How will you respond to the result? Don't believe the hype. Everything about hype is before the result happens. Focus on How, Yours, and Process instead. Someone will take what you were taught was horrible and create a business model. PJ uses Uber and Airbnb as examples. We were taught "stranger danger" as kids. Now we get in cars with strangers while drunk and sleep in their homes. The right people plugged into crazy visions can change everything. Define success as peace of mind. That's how PJ's program defines success. Not wins and losses. Train body language. "Big chest" means standing up straight. Players are not allowed to put their hands on their knees or their heads. If you can't hold yourself up, trainers need to check on you. Teach response, not reaction. You can have emotions, but train to not be emotional. The real world wants to see you react. 
Train to respond properly in every situation. Your words have power. PJ's players know the definitions of 150 words that will help them for the rest of their lives. Give substance to the filters. That's your job as an educator. Cut all the fat off practice. PJ was from the era of 3.5-hour practices. He has ADD and needs to move. He got bored as a player, so he vowed to run practice differently. Run a 32-second play clock constantly. Every 32 seconds, you run a play. You are always under the two-minute warning in practice. This trains your team to operate under pressure. Never practice longer than 95 minutes. It's one thing to watch as a recruit. It's another to experience it as a player. Kids puke during dynamic warmup in the first week because it's that intense. Make practice harder than the game. The game will eventually slow down for your players if practice is legitimately harder. Nektons flow through water currents without being affected. Don't let circumstances dictate behavior. Train this mindset daily. The biggest jump in sports is from high school to college. 17-year-olds playing against 24-year-olds. It's not just talent. It's experience, development, strength, and confidence all at once. Never let any environment be too big for your coaches. Train your staff to be comfortable in all situations, not just your players. Always be learning outside your field. PJ attends leadership seminars with SEALs and Green Berets. At one dinner, a retired military officer who looked like Sean Connery scanned the room quietly, then said: "I'm taking in all the good in the room. I'm also coming up with a plan to kill every one of you, in case I need to." He never came back to the table because he got called to active duty and left for Afghanistan. Always be ready. That's what makes you special. Watch to learn. PJ watched "Landman" and took notes on how to run the next team meeting. His wife hates that he can never relax. Find teaching and education in everything you do. When you stop, you stop growing. Get better at celebrating. PJ has a great bourbon and champagne collection. He celebrates more than he ever has. Balance the intensity with moments of joy. Make transformational programs real. Gopher for Life program. Monthly educational courses. Monthly date nights where players bring their dates and learn dinner etiquette. Monthly racial education class. Weekly coach development on Thursdays, where coaches speak on any topic to advance their careers. Don't let important things stop when the news cycle moves on. COVID and racism got put in the same bracket. When COVID stopped, racism education stopped everywhere. Not at Minnesota. Keep going. Bring back the fun. After wins, players can't wait to pick the design for the next team shirt. PJ gives them five options, and they get into it. People are losing the fun connection that made elementary school great. A coach's job is to teach and demand. A player's job is to prepare and perform. If you're a coach, you better be teaching things: life, sport, relationships. Elite teams are led by players. Your job is to get as many elite people to the front of the bus as possible. More Learning #226 - Steve Wojciechowski: How to Win Every Day #281 - George Raveling: Wisdom from MLK Jr to Michael Jordan #637 - Tom Ryan: Chosen Suffering: Become Elite in Life & Leadership  

    The Retrospectors
    Magic Johnson: Hoops and Hope

    The Retrospectors

    Play Episode Listen Later Feb 9, 2026 12:20


Just three months after Magic Johnson retired from basketball due to his HIV diagnosis, he made a triumphant return on 9th February, 1992 - at the NBA All-Star Game in Orlando, Florida. The sports world was divided: some players, like Michael Jordan, welcomed him back, while others, like Karl Malone, were hesitant, voicing concerns about physical contact on the court. But when Johnson stepped out, fans and fellow players alike cheered him on, and Johnson racked up 25 points, dished out nine assists, and led the West to a dominant 153-113 victory over the East, earning Most Valuable Player honors. In this episode, Arion, Rebecca and Olly discover how Johnson became the face of basketball's golden era; explain why misunderstandings and ignorance about HIV were so widespread; and uncover the career Johnson built beyond basketball...
Further Reading:
• 'Magic Johnson returns for All-Star Game | February 9, 1992' (HISTORY, 2024): https://www.history.com/this-day-in-history/magic-johnson-returns-for-all-star-game
• 'Magic Johnson Talks About How He 'Needed' His Historic 1992 All-Star Game' (UpRoxx, 2016): https://uproxx.com/dimemag/magic-johnson-1992-all-star-game-hiv/
• The Announcement: Magic Johnson (NBA, 2016): https://www.youtube.com/watch?v=xMMWLS8D4OU
Love the show? Support us! Join

    The Neurodivergent Experience
    Mindful Mondays With Ashley Dupuy: Thoughts Are Not Facts | Growth Mindset for Neurodivergent Minds

    The Neurodivergent Experience

    Play Episode Listen Later Feb 9, 2026 38:48


Seeing your life clearly doesn't mean seeing it harshly. In this episode of Mindful Mondays, we explore how mindset and reframing shape not just how we think - but how our nervous system experiences the world. Many neurodivergent and highly sensitive people live with a loud inner commentary. Thoughts can feel convincing, critical, and fixed - yet thoughts are not facts.
Together, we explore:
* Growth mindset through a neurodivergent lens
* Why reframing supports nervous system safety (not toxic positivity)
* How meaning - not circumstances - shapes our experience
* Why challenges often deepen, rather than diminish, a meaningful life
Drawing on wisdom from thinkers and creatives including William James, Hugh Mackay, Tina Turner, Joan Rivers, Kurt Vonnegut, and Michael Jordan, this episode invites a gentler, truer way of seeing yourself. You'll also be guided through a reflective visualisation - The Gallery of Your Life - offering a new relationship with past moments, old judgments, and the stories you live inside. This is not about fixing yourself. It's about learning to see yourself in a way that supports you.
Our Sponsors:

    The Captain w/ Vershan Jackson – 93.7 The Ticket KNTK
    Is LeBron in his Wizards Michael Jordan Years?: February 9th, 2:45pm

    The Captain w/ Vershan Jackson – 93.7 The Ticket KNTK

    Play Episode Listen Later Feb 9, 2026 9:24


Is LeBron in his Wizards Michael Jordan Years?
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

    Black Men Sundays
    Why Most Businesses Fail at Marketing—and How to Fix It

    Black Men Sundays

    Play Episode Listen Later Feb 8, 2026 42:45


This episode of Black Men Sundays is a no-nonsense marketing masterclass with John Dwyer (JD), founder of the Institute of WOW and the strategist behind campaigns for Michael Jordan, Disney, KFC, Warner Bros., and Jerry Seinfeld. JD breaks down why most businesses fail at marketing, why discounting is a losing game, and how direct-response and incentive-based strategies can generate massive, immediate demand. From global campaigns to simple ideas that produce avalanche-level leads, this conversation is a blueprint for entrepreneurs who want customers now, not excuses. Tune in now!
Black Men Sundays is ranked #9 of the top 80 Black Wealth Podcasts on https://podcasts.feedspot.com/black_wealth_and_investing_podcasts/
Subscribe on our YouTube channel! https://www.youtube.com/@blackmensundays
Follow us on Instagram and TikTok @blackmensundays

    The Sideline Live Podcast
    #213 Vern Gambetta // Coaching the best to get better

    The Sideline Live Podcast

    Play Episode Listen Later Feb 8, 2026 98:12


    On episode 213 I am delighted to be joined by athletic development pioneer Vern Gambetta. He is widely regarded as the founding father of functional sports training. He is a lecturer, coach, speaker, author. He is the founder of GAIN, a leader in multi-sport coaches education.  Take a look at his full bio below

    The Thing Is...
    470: Just Like A Basketball Player (Joanna Angel)

    The Thing Is...

    Play Episode Listen Later Feb 7, 2026 74:32


A late Figs and later Joanna Angel join Shannon for a fun show! We get Joanna's reaction to Gay Blind Mike claiming Natalie is not good at oral pleasure, an overview of what dating is like for a recently divorced adult film star, a bootycall fit for the Michael Jordan of sex, we watch Luis J Gomez get into a fight in Austin, and so much more!
Air Date 2.3.26
Support our sponsors:
https://bodybraincoffee.com - use the code DING20 to get 20% off!
https://yokratom.com/ - Home of the $60 Kilo
Send in your stories for Bad Dates, Bad Things, and Scary Things to thethingispodcast@gmail.com
The Thing Is... Podcast Merch available at https://gasdigitalmerch.com/collections/the-thing-is
The Thing Is... airs every Tuesday at 5:30pm ET on the GaS Digital Network! The newest 20 episodes are always free, but if you want access to all the archives, watch live, chat live, access the forums, and get the show five days before it comes out everywhere else, you can subscribe now at gasdigital.com and use the code TTI to get a one-week free trial.
Follow the show on social media!
Joanna Angel - Instagram: https://www.instagram.com/joannaangel/
Mike Figs - Instagram: @comicmikefigs
Shannon Lee - Instagram: @shannonlee6982
Shannon's Amazon Wishlist: https://www.amazon.com/hz/wishlist/ls/3Q05PR2JFBE6T?ref_=wl_share
To advertise your product on GaS Digital podcasts please email jimmy@gasdigitalmarketing.com with a brief description of your product and any shows you may be interested in advertising on.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Club Capital Leadership Podcast
    Episode 538: Playing To Win

    Club Capital Leadership Podcast

    Play Episode Listen Later Feb 6, 2026 11:40


    Bradley Hamner breaks down a powerful lesson from Michael Jordan's "The Last Dance" documentary about the fundamental difference between playing to win versus playing not to lose—and how this mindset shift can transform your business leadership.Register Now! Lead Yourself First: February 24th - an Above The Business WorkshopFREE workshop for business owners who planned well but are running on fumes.You set the goals in January. You aligned the team. You built the plan.But six weeks in, you're exhausted.Your best hours are going to fires. Decision fatigue is real. And your team senses it.Here's the truth: Your business won't grow beyond you until you lead yourself first.Join Bradley for a FREE 2-hour workshop on February 24th at 10 AM CST.You'll build your Personal Operating System, Decision Framework, Energy Protection Plan, Role Clarity Matrix, and 90-Day Accountability Structure.Space is limited. https://blueprintos.com/assetsThanks to our sponsors...Coach P found great success as an insurance agent and agency owner. He leads a large, stable team of professionals who are at the top of their game year after year. Now he shares the systems, processes, delegation, and specialization he developed along the way. Gain access to weekly training calls and mentoring at www.coachpconsulting.com. Be sure to mention the Above The Business Podcast when you get in touch.Autopilot Recruiting helps small business owners solve their staffing challenges by taking the stress out of hiring. Their dedicated recruiters work on your behalf every single business day - optimizing your applicant tracking system, posting job listings, and sourcing candidates through social media and local communities. With their continuous, hands-off recruiting approach, you can save time, reduce hiring costs, and receive pre-screened candidates, all without paying any hiring fees or commissions. More money & more freedom: that's what Autopilot Recruiting help business owners achieve. Visit https://www.autopilotrecruiting.com/ and don't forget to mention you heard about us on the Above The Business podcast.Direct Clicks is built is by business owners, for business owners. They specialize in custom marketing solutions that deliver real results. From paid search campaigns to SEO and social media management, they provide the comprehensive digital marketing your business needs to grow. Here's an exclusive offer for Above The Business listeners: Visit directclicksinc.com/abovethebusiness for a FREE marketing campaign audit. They'll assess your website, social media, SEO, content, and paid advertising, then provide actionable recommendations. Plus, when you choose to partner with them, they'll waive all setup fees.About Above The Business:Above The Business is hosted by Bradley Hamner, founder of BlueprintOS, and focuses on helping small business owners transition from Rainmaker to Architect. Each week, Bradley shares frameworks, interviews successful entrepreneurs, and provides actionable insights for building businesses that run without you. Whether you're doing $300K or $3M in revenue, this show will help you get above your business and design the systems you need to scale.

    Daily Motivations
    Built From Loss

    Daily Motivations

    Play Episode Listen Later Feb 6, 2026 24:43


The best in the world know they will fail again and again, but they have learned how to deal with it. Best Motivational Speeches from Motiversity, featuring speeches from Michael Jordan, Kobe Bryant, LeBron James, Tom Brady, Mike Tyson, and more.
►Speakers: Michael Jordan, Kobe Bryant, Kevin Hart, Chris Senegal, Dan Millman, Peter Diamandis, Tony Robbins, Da Rulk, Michael Jordan, Mike Tyson, Serena Williams, Roger Federer, Chris Williamson, Conor McGregor, Tiger Woods, Muhammad Ali, Venus Williams, Floyd Mayweather, Babe Ruth, Greg Plitt, David Goggins
Instagram - @daily_motivationsorg
Facebook - @daily_motivationsorg

    Work On Your Game: Discipline, Confidence & Mental Toughness For Sports, Business & Life | Mental Health & Mindset

    Every high performer has a social self and a predator self, and in this episode, I use LeBron James and Michael Jordan as examples to explain the difference. My social self focuses on harmony, approval, and how people see me, while my predator self is locked in on results, execution, and the scoreboard. The tension between these two sides determines how far I can really go. I break down why leaning too much into one identity can limit performance. The goal is knowing when to lead with presence and when to move like a predator focused only on outcomes. Show Notes: [04:22]#1 Are you optimizing approval or optimizing outcomes?  [12:47]#2 The social self seeks consensus, while the predator enforces direction.  [20:09]#3 The social self follows a narrative, where the predator follows the moment.  [23:27]#4 Social self is about relationships. The Predator is about standards. Which one matters more to you? [26:31]#5  The predator is option independent. The social person hedges their bets.  [34:12] Recap Next Steps: 2449: "Group Decision" Is An Oxymoron 1485: "Controlling The Narrative" is For The Losers --- Power Presence is not taught. It is enforced. If you are operating in environments where hesitation costs money, authority, or leverage, the Power Presence Mastermind exists as a controlled setting for discipline, execution, and consequence-based decision-making. Details live here: http://PowerPresenceProtocol.com/Mastermind  This Masterclass is the public record of standards. Private enforcement happens elsewhere. All episodes and the complete archive: → WorkOnYourGamePodcast.com 

    Today in PA | A PennLive daily news briefing with Julia Hatmaker

    A new report has found how much Pennsylvanians need to make in order to “comfortably afford” child care. Another Starbucks has unionized. There's an “arctic blast” heading our way. Finally, the tallest living female dog — who's nearly half the size of Michael Jordan — has Pittsburgh roots. 

    Teton Sports Talk

    All NFL Coaching vacancies filled ✅ Pro Bowl was a failure ✅ Tyler Shough is elite ✅ Well well well, it's that time again. Time for football to end. Sucks to suck. We do a deep dive into the stats, facts, jokes, and rumors ahead of the Pats and Seahawks matchup in Santa Clara. Do we think it's a Seahawks blowout? No. Does Boston need another title? Hell no. We're split down the middle on who wins BUT agree, it's going to be no more than 7 points either way. Oh, what's that? James Harden changed teams?? Cool story. We mix in a little NBA, including the trade deadline movers and shakers. The Bucks keep Giannis because they're scared. The Jazz build for the future with JJJ. And Anthony Davis' time as a Mav is done and dusted. Thanks Nico! Will the French go le crazy for American Football? Is Teton Valley the new Winter Olympic talent hotbed?? Is Cooper Flagg better than Michael Jordan??? And would you inject a needle in your peepee for a chance at Olympic fame???? Download and subscribe, rate and review. Tune in Fridays at 2 PM Mountain Time, only on 89.1 KHOL.

    Verdict with Ted Cruz
    BONUS: Daily Review with Clay and Buck - Feb 3 2026

    Verdict with Ted Cruz

    Play Episode Listen Later Feb 4, 2026 62:07 Transcription Available


    Meet my friends, Clay Travis and Buck Sexton! If you love Verdict, the Clay Travis and Buck Sexton Show might also be in your audio wheelhouse. Politics, news analysis, and some pop culture and comedy thrown in too. Here's a sample episode recapping four takeaways. Give the guys a listen and then follow and subscribe wherever you get your podcasts.

    Buck's NASA Visit
    Buck Sexton shares firsthand insights from his visit to NASA and Blue Origin, transitioning the discussion into national security, defense manufacturing, and the future of American military power. He describes what he calls a renaissance in U.S. defense and aerospace innovation, emphasizing the growing importance of advanced manufacturing, artificial intelligence, drone warfare, hypersonic weapons, and rapid production capabilities. Buck explains that modern warfare increasingly depends on technological superiority and scale, warning that the ability to manufacture advanced systems quickly may determine future conflicts more than traditional troop strength. Clay and Buck also discuss how Silicon Valley's relationship with the U.S. military has evolved, crediting the Trump administration with pushing major technology companies to reengage with national defense efforts. They highlight concerns about China's manufacturing capacity and argue that American tech companies have a responsibility to support U.S. national security. The hosts draw historical parallels to World War II-era industrial mobilization, suggesting that today's defense challenges require similar cooperation between private industry and government. The final segment of Hour 1 explores the rapid commercialization of space and the growing influence of companies like SpaceX and Blue Origin. Buck Sexton describes space exploration as entering a new era driven by private enterprise, faster launch capabilities, and long-term ambitions such as low-Earth-orbit infrastructure and lunar missions. Clay Travis connects these developments to broader trends in media, technology, and artificial intelligence, noting how formerly separate industries are rapidly converging into a single interconnected ecosystem.

    Have You Noticed this About Epstein?
    The Clay Travis and Buck Sexton Show is anchored by an extended, in-depth discussion of the latest Jeffrey Epstein document release, with Clay Travis and Buck Sexton analyzing the significance of more than three million pages of emails and records made public. The hosts argue that the Epstein story has effectively reached its endpoint, contending that the newly released materials do not reveal criminal evidence against additional high-profile figures. They frame Epstein primarily as a wealthy facilitator who leveraged access to attractive, of-age women to ingratiate himself with powerful, older men, rather than uncovering a broader, prosecutable conspiracy. The conversation includes discussion of reputational damage suffered by public figures named in the emails, distinctions between criminal conduct and morally questionable behavior, and why federal investigators typically do not release non-criminal but embarrassing communications. Clay and Buck also address listener skepticism, calls into the show, and questions surrounding Ghislaine Maxwell's conviction, emphasizing that her charges centered on trafficking for Epstein specifically, not a wider group of clients.

    Where is Nancy Guthrie?
    A major developing news story involves the disappearance of Savannah Guthrie's mother, Nancy Guthrie, in Arizona. Clay and Buck carefully walk through the known facts, including her age, physical limitations, and the troubling indicators surrounding the case, such as reports of blood at the scene. They caution against assuming the incident is connected to Savannah Guthrie's celebrity, drawing comparisons to other tragic but random crimes involving relatives of famous individuals, including the murder of Michael Jordan's father. The hosts stress that, based on available information, the case appears to be a serious and concerning missing-person investigation rather than a targeted kidnapping, while urging listeners in Arizona to stay alert as law enforcement updates emerge. The tone shifts as Hour 2 moves into cultural commentary, beginning with a critique of the Grammy Awards and what Clay and Buck describe as its overtly political and "woke" messaging. They focus in particular on Billie Eilish's statement that "no one is illegal on stolen land," which sparks a broader discussion about celebrity activism and perceived hypocrisy. Clay highlights the response from the Tongva tribe, which publicly asserted that Billie Eilish's Los Angeles mansion sits on their ancestral land and suggested she return the property if she truly believes her statement. The hosts use the moment to question performative politics in Hollywood and whether celebrities are willing to apply their rhetoric to their own personal wealth and property.

    Clay's Controversial Music Take
    Buck Sexton reports that the United States has shot down a suspected Iranian drone approaching a U.S. aircraft carrier, using the development to discuss the evolving nature of modern naval warfare. Buck explains how drone technology, hypersonic missiles, and ship-killing capabilities are reshaping global military strategy, potentially turning aircraft carriers into high-value targets in future conflicts. This segment underscores broader geopolitical tensions involving Iran, U.S. military readiness, and the changing balance of power in international security. The hour then pivots back to urgent domestic news, with continued updates on the disappearance and apparent abduction of Nancy Guthrie, the mother of Today Show co-host Savannah Guthrie. Clay and Buck relay that the FBI is now involved, there is no surveillance footage, and authorities believe she was taken against her will in Tucson, Arizona. Emphasizing that this is one of the top stories on national newscasts, the hosts urge listeners, especially those in Arizona, to contact the FBI with any tips. They stress that there is limited verified information available and avoid speculation, framing the situation as a troubling and unresolved missing-person case. Following the serious news, Hour 3 takes a sharp tonal turn into what becomes the most talked-about and interactive segment of the entire program: Clay Travis's declaration that Taylor Swift is the "modern-day Beatles." Clay doubles down on his cultural take, arguing that Taylor Swift's songwriting catalog, longevity, and stadium-selling power will endure for decades, much like The Beatles, while Buck Sexton strongly disagrees. The debate quickly ignites a flood of listener reaction, with calls, emails, and talkbacks pouring in from across the country. Listeners challenge the comparison, propose alternative analogies, such as Taylor Swift being more akin to Elvis or Madonna, and passionately defend or reject Clay's argument.
    Make sure you never miss a second of the show by subscribing to the Clay Travis & Buck Sexton show podcast wherever you get your podcasts! ihr.fm/3InlkL8 For the latest updates from Clay and Buck: https://www.clayandbuck.com/ Connect with Clay Travis and Buck Sexton on Social Media: X - https://x.com/clayandbuck FB - https://www.facebook.com/ClayandBuck/ IG - https://www.instagram.com/clayandbuck/ YouTube - https://www.youtube.com/c/clayandbuck Rumble - https://rumble.com/c/ClayandBuck TikTok - https://www.tiktok.com/@clayandbuck YouTube: https://www.youtube.com/@VerdictwithTedCruz See omnystudio.com/listener for privacy information.

    Optimal Finance Daily
    3446: Today Is the Day to Improve Your Finances! TODAY! by Christina Browning of Our Rich Journey

    Optimal Finance Daily

    Play Episode Listen Later Feb 4, 2026 9:59


    Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 3446: Christina Browning offers a motivating call to action for anyone looking to improve their financial health, starting today. With practical steps like budgeting, automating bills, and tackling debt, she shows how small, immediate actions can lead to long-term financial independence. Read along with the original article(s) here: https://www.ourrichjourney.com/post/improve-your-finances-today Quotes to ponder: "Today is the day to improve your finances! TODAY!" "A budget isn't a static document that you create once, pat yourself on the back for creating it, and then quietly file it away somewhere between your expired To Do List and last year's taxes." "Michael Jordan once said, 'You miss 100% of the shots you don't take.'" Learn more about your ad choices. Visit megaphone.fm/adchoices

    Joe Benigno and Evan Roberts
    NFL Coaching Bingo Results and Who's Fired First?

    Joe Benigno and Evan Roberts

    Play Episode Listen Later Feb 4, 2026 16:27


    The guys go back to the prediction desk and grade their NFL head coaching "bingo" cards now that the jobs are filled. Sean comes out on top, Evan salvages a couple, and Tiki somehow goes 0-for-10. Then the conversation shifts to the real fun question: which of these new hires is the first one fired? The Browns and Raiders dysfunction debate gets heated, Tom Brady's Raiders influence comes up, and the segment detours into a Michael Jordan ownership discussion before taking a caller who jumps into the Yankees lineup controversy with Trent Grisham vs Jasson Domínguez.

    小人物上籃
    小人物上籃 - 霹靂鍵盤 #213: When 晴景 Becomes My Jordan! A Courtside View from the HBL to the Pro Leagues, feat. FTV (民視) sports anchor 暐喆, 02/02/2026

    小人物上籃

    Play Episode Listen Later Feb 4, 2026 165:16


    It takes a village to raise a child, and caring for the elderly takes the whole of society walking alongside. On the road to growing old, Eden hopes to be a partner to elders and their caregivers. From day care and in-home services to family caregiver support services, we invite you to join Eden in using love so that elders can live with security, peace of mind, and happiness. https://fstry.pse.is/8lahd5 (The above is a Firstory Podcast advertisement.)

    Optimal Finance Daily - ARCHIVE 1 - Episodes 1-300 ONLY
    3446: Today Is the Day to Improve Your Finances! TODAY! by Christina Browning of Our Rich Journey

    Optimal Finance Daily - ARCHIVE 1 - Episodes 1-300 ONLY

    Play Episode Listen Later Feb 4, 2026 10:29


    Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 3446: Christina Browning offers a motivating call to action for anyone looking to improve their financial health, starting today. With practical steps like budgeting, automating bills, and tackling debt, she shows how small, immediate actions can lead to long-term financial independence. Read along with the original article(s) here: https://www.ourrichjourney.com/post/improve-your-finances-today Quotes to ponder: "Today is the day to improve your finances! TODAY!" "A budget isn't a static document that you create once, pat yourself on the back for creating it, and then quietly file it away somewhere between your expired To Do List and last year's taxes." "Michael Jordan once said, 'You miss 100% of the shots you don't take.'" Learn more about your ad choices. Visit megaphone.fm/adchoices

    Dis Dat with My Cousin Vlad
    Episode 280: Flicked in the Balls (Publicly)

    Dis Dat with My Cousin Vlad

    Play Episode Listen Later Feb 3, 2026 71:14


    Vlad gets flicked in the balls publicly, breaks down the various characters in a group of males, tries to help a 36-year-old find a Mrs, predicts the future is rife with drug addicts and laziness & rants about why elective pain is the way.

    DNA DISTILLERY (AWARD-WINNING RAKIJA): Award-winning rakija company with immaculate celebratory beverages. Check out the entire range on the websites below, order a tasting pack or some of their flagship, amazing rakija today! https://www.dnadistillery.com

    CARDSTRIKE! Amazing basketball cards, Michael Jordan memorabilia and everything collectable: sports card buying and selling! https://www.cardstrike.com.au

    ROYAL STACKS (IMMACULATE BURGERS): Melbourne's greatest burgers! Royal Stacks is a booming burger chain in Victoria with classic burgers, shakes and more, with a 90s vibe and high-quality food! https://www.royalstacks.com.au

    METROPOLITAN STONE (Kitchens, Cabinets, Laundry, All Cabinets): We have a combined 30 years' experience in the cabinet-making industry in Victoria! Everything from small projects to large projects: benchtop change-overs, kitchen facilities, kitchens, laundries, bathroom cabinets, TV units, wardrobes, etc. MENTION: VLAD. Contact: MATT 0425797488, Matthew@metropolitanstone.com.au, http://www.metropolitanstone.com.au

    ORANGE LEGAL GROUP: Specialising in property law for purchasing and selling, conveyancing, with an in-house mortgage broker & chartered accountant. A one-stop shop for ALL property needs! FREE contract reviews for buyers before purchasing property! Mention VLAD! https://www.orangelegalgroup.com.au Email: property@orangelegalgroup.com.au

    Contact: mycousinvlad@gmail.com
    http://www.instagram.com/mycousinvlad
    Send Vlad a Text Message
    Support the show
    BE GOOD, DO GOOD, GET GOOD

    The Dreamerspro Show
    Dillon Brooks Hints at NBA Player PED Use, Michael Jordan Explains Why He Was Mentally Tougher Than LeBron, Draymond Green Accuses Ref of Racism, NBA Snubs Kawhi for LeBron, Reggie Miller Embarrasses Himself After LeBron Dunk

    The Dreamerspro Show

    Play Episode Listen Later Feb 2, 2026 41:41


    Dillon Brooks Hints at PED Use After Paul George Suspension, Michael Jordan Explains Why He Was Mentally Tougher Than LeBron James With Hecklers, Draymond Green Accuses a Ref of Racism, NBA Snubs Kawhi for LeBron, Reggie Miller Embarrasses Himself After LeBron Dunk. Download the PrizePicks app today and use code CLNS and get $50 instantly when you play $5! Learn more about your ad choices. Visit megaphone.fm/adchoices

    Shoe-In
    #515 Designing for the Court: Creating Footwear That Elevates the Game With Wilson W. Smith III

    Shoe-In

    Play Episode Listen Later Feb 2, 2026 32:01


    What's it like designing shoes for Michael Jordan? In this episode of Behind the Design: A Masterclass in Sneaker Design, powered by Jones + Vining, legendary Nike designer Wilson W. Smith III joins Matt Priest to unpack the creative process behind crafting signature footwear for iconic athletes. From MJ's relentless drive and attention to detail to Wilson's lessons in listening, humility, and passion, this deep dive blends design, storytelling, and basketball like never before. With special guest: Wilson W. Smith III, Footwear Design Director Hosted by: Matt Priest

    The Odd Couple with Chris Broussard & Rob Parker
    Best of The Odd Couple

    The Odd Couple with Chris Broussard & Rob Parker

    Play Episode Listen Later Jan 30, 2026 32:41 Transcription Available


    Rob and Kelvin use the Bill Belichick Hall of Fame snub as a backdrop for discussing what levels of cheating we find acceptable these days, and tell us why they were so disappointed by Michael Jordan's contributions to NBC. Plus, FOX Sports NFL writer Ralph Vacchiano swings by to discuss all the fallout from Bill Belichick's Hall of Fame snub. See omnystudio.com/listener for privacy information.

    The Odd Couple with Chris Broussard & Rob Parker
    Hour 3 – Michael Jordan Let Us All Down & An Ode to Bill Belichick

    The Odd Couple with Chris Broussard & Rob Parker

    Play Episode Listen Later Jan 30, 2026 38:41 Transcription Available


    Rob and Kelvin tell us why they were so disappointed by Michael Jordan's contributions to NBC, and unveil a new song in honor of Bill Belichick's Hall of Fame snub. Plus, the guys go head-to-head in this week's NBA Slam Dunk Contest edition of Teichert's Tower of Trivia. See omnystudio.com/listener for privacy information.

    Joe Benigno and Evan Roberts
    Evan Questions the LeBron MSG Hype

    Joe Benigno and Evan Roberts

    Play Episode Listen Later Jan 30, 2026 23:20


    Evan finally taps out on winter, then immediately pivots into the strangest market story in New York sports right now: why Los Angeles Lakers at New York Knicks at Madison Square Garden has turned into a premium ticket that is somehow bigger than recent LeBron visits and even other historic regular-season moments. Evan challenges the idea that this is about a "last chance" to see LeBron James at MSG, arguing there is no real Garden legacy the way Michael Jordan had, and questioning why the cheaper Barclays Center option does not scratch the same itch. Then the twist: a caller insists the real draw is Luka Dončić, not LeBron, which sends the crew into a full LeBron vs Luka demand debate, including the "iconic player, iconic venue" argument and whether paying that kind of money is actually worth it years later. From there the segment veers into pure Nets anxiety. A caller asks for a sanity check on reports and speculation linking Giannis Antetokounmpo to the Brooklyn Nets, and Evan goes off: if the Nets chase Giannis right now while tanking, he wants heads to roll. He breaks down what is real reporting versus dot-connecting, why offseason timing matters, and why Brooklyn should only be involved as a facilitator for assets, not as a desperate buyer. The rant brings back an all-time flashback to Evan's on-air reaction to the James Harden trade, and he warns that a Giannis move would make that meltdown look tame.

    Joe Benigno and Evan Roberts
    Jets Gruden Rumors and Islanders Fans Take

    Joe Benigno and Evan Roberts

    Play Episode Listen Later Jan 30, 2026 8:03


    The conversation picks up on the Jon Gruden storyline with Evan laying out why, if the idea truly came from ownership, it would signal just how desperate the New York Jets might be. Evan argues that if Jon Gruden ever returns to coaching, it would take both money and real power, and the Jets might be the only team willing to hand him the keys to the franchise. The crew also discusses the unspoken reality that any team openly embracing Gruden could face quiet backlash from the rest of the league, even if fans never see it publicly. From there, the phones flip the segment toward hockey and New York sports pride. Callers gush over Matthew Schaefer, with bold claims that he is already the new king of New York sports. Evan pushes back on the hype, agreeing Schaefer is special but cautioning against crowning dynasties after a few months, which turns into a heated back and forth about prospects, sure things, and why fans always protect their own guys. The segment closes with a classic Knicks history argument as callers debate who really kept the New York Knicks from winning a championship in the 1990s. Evan breaks down why both Michael Jordan and Hakeem Olajuwon were roadblocks, but draws a clear distinction with LeBron James, arguing LeBron never haunted the Knicks the way Jordan did, and that the lingering resentment toward him says more about fan emotion than actual history.

    The Dreamerspro Show
    Rob Parker goes nuclear after hearing LeBron James' excuses for not playing 82 games, Lakers trade deadline update, and Gilbert Arenas stirs controversy by claiming LeBron is the reason Michael Jordan stays relevant

    The Dreamerspro Show

    Play Episode Listen Later Jan 30, 2026 28:59


    Rob Parker goes nuclear after hearing LeBron James' excuses for not playing 82 games, Lakers trade deadline update, and Gilbert Arenas stirs controversy by claiming LeBron is the reason Michael Jordan stays relevant. Download the PrizePicks app today and use code CLNS and get $50 instantly when you play $5! Learn more about your ad choices. Visit megaphone.fm/adchoices

    Bernstein & McKnight Show
    Ben Verlander likes this Cubs' roster, hopes they aren't done adding to it (Hour 4)

    Bernstein & McKnight Show

    Play Episode Listen Later Jan 29, 2026 40:35


    In the final hour, Marshall Harris and Mark Grote were joined by Flippin' Bats podcast host Ben Verlander to discuss the Cubs' additions this offseason and the latest MLB storylines. After that, Harris and Grote listened to Inside the NBA analyst Charles Barkley share his disappointment in the legendary Michael Jordan failing to appear on NBC more often after he was hired as a special contributor for the network's coverage of the NBA.

    The Final Lap Weekly - NASCAR Talk Show
    We're Back! Talking NASCAR Lawsuit, New Playoffs, & Bowman Gray

    The Final Lap Weekly - NASCAR Talk Show

    Play Episode Listen Later Jan 29, 2026 31:03


    Toby and Kerry are back for 2026, now on the Bleav Network! On tap today is the HUGE NASCAR vs. 23XI lawsuit with Michael Jordan and Denny Hamlin; Toby was IN the courtroom! Also: NASCAR's ALL NEW Playoff System for 2026, and will we see NASCAR racing at Bowman Gray Stadium this weekend for The Clash? Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Mully & Haugh Show on 670 The Score
    We want Michael Jordan to show up more in Chicago (Hour 4)

    Mully & Haugh Show on 670 The Score

    Play Episode Listen Later Jan 29, 2026 34:53


    In the final hour, Mike Mulligan and David Haugh were joined by NFL Network reporter Stacey Dales to preview the Patriots-Seahawks matchup in the Super Bowl and to detail why the Bears' success in 2025 should be sustainable under head coach Ben Johnson's leadership. Later, Mully and Haugh discussed how they wished that the legendary Michael Jordan was around the Bulls and in Chicago more often.

    Behind Your Back Podcast with Bradley Hartmann
    507 :: What a Leadership Book Featuring Jordan, Messi, & Bear Bryant Taught Me About Ambition and Sacrifice

    Behind Your Back Podcast with Bradley Hartmann

    Play Episode Listen Later Jan 29, 2026 15:38


    Are you trading time, health, or relationships in pursuit of your goals, and not even realizing it? In this episode, Bradley Hartmann shares a powerful non-obvious leadership book, The Cost of These Dreams by Wright Thompson, that reframes ambition, success, and the unseen sacrifices we make to chase big dreams. Through stories of iconic figures like Michael Jordan, Lionel Messi, Urban Meyer, Pat Riley, and Dan Gable, we explore what it really costs to lead at a high level, and whether we're willing to pay that price.

    In this episode you will:
    Discover how world-class performers balance, and often fail to balance, success and self.
    Learn how ambition, if unchecked, can erode the very things you're working to protect.
    Reflect on your own leadership path and how to pursue excellence without losing yourself in the process.

    Listen now to discover what elite competitors reveal about success, sacrifice, and how to lead with greater clarity and intention in the construction industry. At Bradley Hartmann & Company, we help construction teams improve sales, leadership, and communication by reducing miscommunication, strengthening teamwork, and bridging language gaps between English and Spanish speakers. To learn more about our product offerings, visit bradleyhartmannandco.com. The Construction Leadership Podcast dives into essential leadership topics in construction, including strategy, emotional intelligence, communication skills, confidence, innovation, and effective decision-making. You'll also gain insights into delegation, cultural intelligence, goal setting, team building, employee engagement, and how to overcome common culture problems, whether you're leading a crew or managing an entire organization. Have topic ideas or guest recommendations? Contact us at info@bradleyhartmannandco.com. New podcasts are dropped every Tuesday and Thursday. This episode is brought to you by The Construction Spanish Toolbox, the most practical way for construction teams to learn jobsite-ready Spanish in just minutes a day over 6 months.

    The Pat McAfee Show 2.0
    PMS 2.0 1496 - Bill Belichick Not Elected to the Hall of Fame, Brock Lesnar, JJ Watt, Jimmy Johnson, Mike Tirico, AQ Shipley, Darius Butler, & AJ Hawk

    The Pat McAfee Show 2.0

    Play Episode Listen Later Jan 28, 2026 169:53


    On today's show, Pat, AQ Shipley, Darius Butler, AJ Hawk, and the boys react to the news that Bill Belichick was somehow not elected to the Pro Football Hall of Fame, calling into question whether or not it matters if you're a first-ballot Hall of Famer, and why there needs to be a change to the system. AQ also breaks down the trenches in the Super Bowl and why he thinks whoever is more dominant on the defensive line will win the game. The boys are also joined by several incredible guests, including Brock Lesnar, who announces his intentions to enter the Royal Rumble, JJ Watt, Hall of Famer Jimmy Johnson to talk about his disgust at Bill Belichick not being a first-ballot Hall of Famer, and lastly, Mike Tirico to chat about calling the Super Bowl, the Olympics, his conversation with Michael Jordan, and more. Make sure to subscribe to youtube.com/thepatmcafeeshow or watch on ESPN (12-2 EDT), ESPN's Youtube (12-3 EDT), or ESPN+. We appreciate the hell out of all of you, we'll see you tomorrow. Cheers. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The Learning Leader Show With Ryan Hawk
    672: Brad Stulberg - The Neuroscience of Curiosity, Process vs. Outcome Goals, The Power of Consistency, Playing Like The Beatles, Focusing on Your WHO, and The Way of Excellence

    The Learning Leader Show With Ryan Hawk

    Play Episode Listen Later Jan 26, 2026 71:32


    Go to www.LearningLeader.com to learn more This is brought to you by Insight Global. If you need to hire one person, hire a team of people, or transform your business through Talent or Technical Services, Insight Global's team of 30,000 people around the world has the hustle and grit to deliver. www.InsightGlobal.com/LearningLeader My guest: Brad Stulberg is a bestselling author and leading expert on sustainable performance and well-being. He's written for The New York Times, Outside Magazine, and The Atlantic, and his previous books include Peak Performance and The Practice of Groundedness. His latest book, The Way of Excellence, is great. Brad's writing combines cutting-edge science, ancient wisdom, and stories from world-class performers to help people do their best work without losing themselves in the process. Notes: Never pre-judge a performance. When you're feeling tired, uninspired, or off your game, show up anyway. Remember the Beatles scene—they looked bored and exhausted, but Paul still wrote "Get Back" that day. You don't know what's possible until you get going. Discipline means doing what needs to be done regardless of how you feel. As powerlifter Layne Norton says, we don't need to feel good to get going... We need to get going to give ourselves a chance to feel good. Stop waiting for motivation. Start moving and let the feeling follow. Audit who you're surrounding yourself with. The Air Force study is striking: the least fit person in your squadron determines everyone else's fitness level. If you sit within 25 feet of a high performer at work, your performance improves 15%. Within 25 feet of a low performer? It declines 30%. Your environment isn't neutral... Choose wisely. Treat curiosity like a muscle. It's a reward-based behavior that gets stronger with use. When Kobe said he played "to figure things out," he was tapping into the neural circuitry that makes learning feel good and builds upon itself. Ask more questions. Stay curious about your craft. Excellence isn't about perfection or optimization... It's about mastery and mattering. It's about showing up consistently, surrounding yourself wisely, and staying curious along the way. To the late Robert Pirsig - one of the greatest blessings and joys and sources of satisfaction in my life is to be in conversation with your work. He's the author of Zen and the Art of Motorcycle Maintenance— "gumption is the psychic gasoline that keeps the whole thing going." Arrogant people are loud. Confident people are quiet. Confidence requires evidence. The neural circuitry associated with curiosity is like a muscle: it gets stronger with use. Curiosity is what neuroscientists call a reward-based behavior. It feels good, motivates us to keep going, and builds upon itself. Kobe didn't play to win. He played to learn and grow. Kobe Bryant said he didn't play not to lose, and he didn't even play to win. He played to learn and to grow. He said the reason he did that is because it's so much more freeing. If you're really trying not to lose, you're going to be tight. If you're really trying to win, you're going to be tight. But if you're just out there to grow, you're going to be in the moment. When you're in the moment, you give yourself the best chance of having the performance you want. The word compete comes from the Latin root word com, which means together, and petere, which means to seek, rise up, or strive. In its most genuine form, competition is about rising together (Caitlin Clark's story against LSU). 
Love: The Detroit Lions had just won their first playoff game in 32 years. Following the game was a scene of pure jubilation. During a short break from the celebrating, the head coach, GM, and quarterback all gave brief speeches, which collectively lasted about 2 minutes. During those 2 minutes, the word LOVE was repeated 7 times. Homeostatic regulation: sense it in the greatness of others and when you're at your best. It's what Brad calls "excellence." Surround yourself with people who have high standards. When things don't go your way, when you're inevitably heartbroken or frustrated, it's the people around you, the books you read, the art around you, the music you listen to, that's the stuff that speaks to you and keeps you going. It keeps you on the path even amidst the heartbreak. Process goals work better than outcome goals for most people. If you're an amateur, you should be process-focused. When I train for powerlifting, I don't think about the meet that I'm training for. I think about showing up for the session today. If I think about the meet, I get anxious, and my performance goes down. But if you're Steph Curry and you've been doing your thing for 20 years, you can think about winning the gold medal because your process is so automatic. For 99% of people, focus on the process. "Brave New World" turns fear into curiosity. When you walk up to a bar loaded with more weight than you've ever touched, there can be fear about what it's going to feel like. If you go up to the bar with fear, you're going to miss the lift. If you're convinced you're going to make it, you'll make it, but your nervous system knows when you're lying to yourself. The middle ground is curiosity. Instead of saying "that's heavy, it's scary," I say "Brave New World. I've never touched this weight before. I have no idea what's going to happen, but let's find out." It splits the difference. I'm hyped, I'm giving myself a chance, I'm not lying to myself, but I'm also not scared. Curiosity and fear cannot exist at the same time in the brain. There are seven pathways in the brain defined by affective neuroscientist Jaak Panksepp. Two of those pathways are the rage/fear pathway and the seeking/curiosity pathway. These pathways cannot be turned on at the same time. They compete for resources. It's a zero-sum game. You cannot simultaneously be raging and curious. You cannot be terrified and curious at the same time. If you get into a mindset of curiosity, it's extremely hard to be angry or terrified. By being curious, we turn off the fear deep in our brains and give ourselves a chance to perform our best. Practice curiosity in lower-consequence situations first. Curiosity is like a muscle. If you're about to do something absolutely terrifying and you're really scared and you say, "I'm just going to be curious," you know you're lying to yourself. You have to practice in lower-consequence situations first. When you, as a parent, get really upset with your kid, try to be curious about their experience. Watch your anger calm down. When you, as a leader, have a board presentation where you're feeling anxious, try to have that mindset of "Brave New World." When you're an athlete going into a big game obsessing about what could go wrong, try to be really curious instead. The best competitors have emotional flexibility. As a competitor, you would know that in the confines of the game, you're not singing Kumbaya, you are trying to kill them.
Then you have the emotional flexibility the minute that game ends to respect them as a person. That is the best way to compete. That's when our best performances happen. It's not either/or, it's both/and. It's playing really hard, giving everything you can for the win, seizing on your opponent's vulnerability, at the same time as having deep respect for them. You don't have to be miserable to be excellent. There are people like David Goggins or Michael Jordan who seem motivated by anger and a chip on their shoulder. But Jordan would put his tongue out like this primal expression of joy when he was about to dunk. And Jordan won all his championships while being coached by Phil Jackson, the Zen master of compassion. There are the Steph Currys of the world, or Courtney Dauwalter (best ultra marathoner to ever exist), or Albert Einstein (total mystic who had so much fun in his work). There are two ways to the top of the mountain. For 99.999% of people, you end up performing better with fun and joy, and you have so much more satisfaction, which contributes to longevity. The best leaders take work seriously but laugh at themselves. The best leaders I know in the corporate world, they take the work so seriously. They are so intense. But my God, do they laugh at themselves and their colleagues and have fun. Reflection Questions Brad says, "The things that break your heart are the things that fill your life with meaning." What are you currently holding back from caring deeply about because you're afraid of getting hurt? What would it look like to step fully into that arena despite the risk of heartbreak? The Air Force study showed that sitting within 25 feet of a low performer decreases your performance by 30%. Honestly assess who you're spending the most time with right now. Are they raising your standards or lowering them? What specific change could you make this month to shift your environment? Brad uses "Brave New World" to turn fear into curiosity before big challenges. Think of something coming up that makes you anxious. Instead of trying to convince yourself you'll succeed or dwelling on the fear, what does it feel like to approach it with pure curiosity: "I've never done this before. Let's find out what happens."

    The Dan Le Batard Show with Stugotz
    The Big Suey: The Jewban Dan Le Batard (feat. David Samson & Adnan Virk)

    The Dan Le Batard Show with Stugotz

    Play Episode Listen Later Jan 22, 2026 41:20


    "Can someone tell me what the hell Hamnet is, baby?" David Samson is here to discuss UM's devastation, Michael Jordan's 'suspension,' and the Dolphins hire, but before we get to The Oscar Nominations, Adnan crashes in to completely hijack the segment. Learn more about your ad choices. Visit podcastchoices.com/adchoices