Podcasts about Jumping

Form of movement in which an organism or mechanical system propels itself into the air

  • 8,282 PODCASTS
  • 11,950 EPISODES
  • 42m AVG DURATION
  • 2 DAILY NEW EPISODES
  • Mar 2, 2026 LATEST
Popularity of "Jumping" (trend chart, 2019–2026)



Latest podcast episodes about Jumping

The Maverick Show with Matt Bowles
378: From Almost Jumping to Building a Life Across Continents: Cara Laban on Depression, Travel & Reinvention

Mar 2, 2026 · 52:13


Learn how Cara stopped escaping and designed a sustainable world travel lifestyle that works for her brain.

Get the Monday Minute, my weekly email with 3 personal recs for travel, culture, and living beyond borders that you can read in 60 seconds.

ON THIS EPISODE: Cara Laban left New York chasing a childhood dream of Australia, but what began as an escape from burnout turned into a two-year immersion in life overseas, from bar shifts in Melbourne to bakery work in the outback. In this candid conversation, she shares the mental health lows that followed her across continents, the road-trip mishap that left her stranded without gas in remote Australia, and the moment in Thailand when she realized travel alone wouldn't fix what she was carrying internally. We trace her evolution from working local hospitality jobs to building a sustainable travel lifestyle online after discovering Hannah Dixon's Virtual Excellence Academy. That reinvention eventually led her to found Travel Reddi, an AI-powered travel logistics platform designed to simplify global mobility. Cara opens up about depression on the road, the difference between analog and digital nomad life, and how systems thinking became the foundation of both her business and her forthcoming book, “How to Do Anything Even if You're Lazy”. This is a story about reinvention, self-honesty, and designing a life overseas that actually works for your brain. → Full show notes with direct links to everything discussed are available here.

FREE RESOURCES FOR YOU: See my Top 10 Apps For Digital Nomads · See my Top 10 Books For Digital Nomads · See my 7 Keys For Building A Remote Business (even in a space that's not traditionally virtual) · Watch my Video Training on Stylish Minimalist Packing so you can join #TeamCarryOn · See the Travel Gear I Use and Recommend · See How I Produce The Maverick Show Podcast (the equipment, services & vendors I use)

ENJOYING THE SHOW? Follow The Maverick Show on Instagram and DM Matt to continue the conversation. Please leave a rating and review — it really helps the show and I read each one personally. You can buy me a coffee — espressos help me produce significantly better podcast episodes! :)

7 Figure Flipping with Bill Allen
[860] When Knowledge Becomes the Bottleneck

Feb 27, 2026 · 23:30


Let me tell you a harsh truth… You don't fail because you don't know enough. You fail because you know too many things, and execute none of them long enough. You're constantly switching strategies. Jumping from one idea to the next. Consuming instead of committing. Nothing compounds because nothing stays in place long enough to work. But there is a solution, and that's exactly what Lindsay and I talked about in today's episode. We break down how to cut through the noise, pick one strategy, and execute it long enough for it to actually compound. And if you're looking to pick one strategy and actually commit to it, you must join us at the 2-Day Flip Funding Challenge. In 2 focused days, we help you install a repeatable private capital strategy you can execute for the next 90 days, without chasing money deal to deal. CLICK HERE to register for the 2-Day Flip Funding Challenge >> Catch you later!

LINKS & RESOURCES

Private Lending for Real Estate Investors: Conventus offers personalized, white-glove service tailored specifically to real estate investors. CLICK HERE: https://conventus-apply.formstack.com/forms/7_figure_flipping_mk

7 Figure Flipping Underground: If you want to learn how to make money flipping and wholesaling houses without risking your life savings or "working weekends" forever... this book is for YOU. It'll take you from "complete beginner" to closing your first deal or even your next 10 deals without the bumps and bruises most people pick up along the way. If you've never flipped a house before, you'll find step-by-step instructions on everything you need to know to get started. If you're already flipping or wholesaling houses, you'll find fast-track secrets that will cut years off your learning curve and let you streamline your operations, maximize profit, do MORE deals, and work LESS. CLICK HERE: https://hubs.ly/Q01ggDSh0

7 Figure Runway: Follow a proven 5-step formula to create consistent monthly income flipping and wholesaling houses, then turn your active income into passive cash flow and create a life of freedom. 7 Figure Runway is an intensive, nothing-held-back mentoring group for real estate investors who want to build a "scalable" business and start "stacking" assets to build long-term wealth. Get off-market deal sourcing strategies that work, plus 100% purchase and renovation financing through our built-in funding partners, a community of active investors who will support and encourage you, weekly accountability sessions to keep you on track, 1-on-1 coaching, and more. CLICK HERE: https://www.7figureflipping.com/runway

Connect with us on Facebook and Instagram: @7figureflipping

Hosted on Acast. See acast.com/privacy for more information.

The LinkedIn Branding Show
How to Navigate LinkedIn If You're Jumping In From Other Social Media Sites

Feb 26, 2026 · 13:59


IN THIS EPISODE: New to LinkedIn, or coming back after a few years away? LinkedIn is a whole new world and the place to be. This episode shares what to know, where to start, and how to navigate the platform so you can get comfortable, start posting, and get ahead.

CONTACT US:

Michelle J Raymond is a globally recognized LinkedIn™️ for business growth speaker, author and consultant. Her services – audit & strategy, LinkedIn training and LinkedIn profile rewrites.
LinkedIn: https://www.linkedin.com/in/michellejraymond/
Website: https://b2bgrowthco.com/

Michelle B. Griffin is a TEDx speaker and personal brand + PR strategist who helps women experts become recognized authorities and thought leaders in their industries. As the founder of Brand Leaders and creator of the Own Your Lane™ Recognition Roadmap + She's Visible™ women's leadership program, Michelle equips professionals to position their personal brands for recognition, media opportunities and industry impact.
LinkedIn: https://www.linkedin.com/in/michellebgriffin/
Websites: https://michellebgriffin.com and OwnYourLane.io

Buy your copy on Amazon: The LinkedIn Branding Book, The Power of Two: Build Your Personal and Business Brand on LinkedIn for Exponential Growth - https://mybook.to/The_LinkedIn_Branding_Book
https://MichelleSquared.com

OUR BOOKS: The LinkedIn Branding Book + Workbook · Position Yourself Personal Branding Planner · Business Gold: LinkedIn Company Pages

SUBMIT YOUR QUESTION: Simply DM both Michelles on LinkedIn to submit your question for a future episode.

LINKS

BE A STANDOUT WITH THE OWN YOUR LANE™ PODCAST: Position Your Brand. Build Visible Authority. If you're enjoying The LinkedIn Branding Show, you'll love Michelle Griffin's solo podcast, Own Your Lane™, the personal branding and PR podcast for women experts and leaders ready to be chosen. Learn how to sharpen your positioning, build visible authority, and get selected for speaking, media, podcasts, and even AI search. Listen here.

POWER HOUR: Fix What's Holding Your LinkedIn Back. If LinkedIn feels almost right but something is still holding you back, Michelle J Raymond's Power Hour helps you gain clarity, direction, and a practical plan to fix exactly what's not working. Book here: https://b2bgrowthco.com/power-hour-session/

The Momlife Mindset
Episode 216: GLP-1s and Peptides Are NOT Magic; Why Jumping in Too Soon Will Cost You!

Feb 26, 2026 · 20:01


Are peptides the best answer to weight loss, inflammation and hormone balance - or are they being completely misused? Let's be real - the conversation around peptides is exploding in the wellness space right now, from GLP-1 medications to recovery peptides, gut-healing protocols, immune support and fat-loss optimization. There's a lot of noise alongside a lot of misinformation. But here's the truth that isn't being shared enough:

Eminent Americans
Jonathan Lear, Local Exemplar

Feb 26, 2026 · 85:08


My guest on the show today is Jonny Thakkar. Jonny is an Assistant Professor in Political Science at Swarthmore College and one of the founding editors of The Point. He's the author of various articles, most recently “Beyond Equality” in the newest issue of The Point, and the 2018 book Plato as Critical Theorist. I asked Jonny on to talk about his late friend and mentor the philosopher and psychoanalyst Jonathan Lear, who was his advisor at the University of Chicago Committee on Social Thought and, as you'll hear in our discussion, his occasional advisor on matters of the heart. He wrote about Lear, after his death, along with a collection of other remembrances from friends and colleagues of Lear's:

His own career path was so individual as to be impossible to emulate. Institutionally speaking, he had completed two undergraduate degrees, one in history and the other in philosophy, followed by two graduate degrees, the first a Ph.D. on Aristotle's logic under the supervision of Saul Kripke—a prodigy in contemporary logic and metaphysics who was only eight years older than Jonathan, had no expertise in Aristotle and only ever supervised one other dissertation—and the second a professional qualification in psychoanalysis that licensed him to treat patients clinically. His philosophical interlocutors were many and various, among them Plato, Aristotle, Kierkegaard, Freud, Heidegger, Wittgenstein, Williams, J. M. Coetzee and Marilynne Robinson, but he was no dilettante. He wanted to understand what it meant to be human, and he simply followed that question wherever it took him. Without end, I should add: he took up the study of ancient Hebrew in his mid-seventies because he had become so puzzled by the treatment of the prophet Balaam that he wanted to make sure he wasn't missing anything in translation!

That ethos of constant self-development was central to what you might call Jonathan's philosophy of life. Some people use the term “perpetual student” pejoratively; for Jonathan, being open to learning from the world was the key to human flourishing. As he told matriculating undergraduates in a 2009 address, “the aim of education is to teach us how to be students.” In the preface to Open Minded, he wrote that achieving tenure at Cambridge in his twenties freed him from professional pressures to such an extent that he was forced to confront the meaning of his own existence. “I realized that before I died, I wanted to be in intimate touch with some of the world's greatest thinkers, with some of the deepest thoughts which humans have encountered. I wanted to think thoughts—and also to write something which mattered to me.”

We talk about Lear's work, but also about what it means to be, or be influenced by, what Lear called a “local exemplar,” which is someone who has a profound influence on the people around him or her. An exemplar could be a real mentor in the classic sense, as Lear was for Jonny and other students of his, or a writer who affects other people just through text, which is how he functioned in my life.
It could also be someone who just said or did something once or a few times that stays with us, imprints itself on us, and changes us in ways that unfold over time. So we talk about how Lear played that role in our lives, but also about the ways in which Thakkar may be playing the role of local exemplar, as a teacher, in the lives of his students, and more generally what it is about someone, or something, that makes it capable of influencing us in these ways.

One reason we ended up in this space, I think, is that I've been wrestling a lot, lately, with the question of how writing does or doesn't influence people, because I'm writing a book, on relationships and therapy, that edges into the territory of self-help, and I've become moderately obsessed with not replicating the mistake that so many self-help books make on this front, which is thinking that in order to help people, the thing to do is give them straightforward advice on how to do or be better.

This always seems to me like a fundamental misunderstanding of how texts change people, and in some ways an odd one to make, in particular, for the therapists and psychologists who write so many of these books. If anyone should understand that the human psyche is tricky and that real change tends to be a product of close relationships and communal structures playing out over time, rather than advice distilled to words, it should be therapists.

Texts do change people's lives, but it's indirect. They're poetic. They're narrative. They're allusive and elusive. They're not precision tools to achieve a predictable outcome in readers.

Lear understood this. I asked him once if the style of his essays was deliberately looping and associative because he was trying to emulate something about the rhythms of psychoanalytic practice, and his response was surprise. I just try to write clearly, he said, and the more I think the more I believe him. I think there was something so integrated in the way he did all these things – teach, write, practice psychoanalysis – that his version of writing clearly became this thing that I perceived as indirect, and that it is because of this, in some sense, that his writing has the capacity to affect people in a way that most self-help literature doesn't.

I didn't know Lear well, as a person, but he had, and continues to have, a big influence on me. That's even more the case for Jonny, as you'll hear. I don't think he's for everyone, but if he might be for you, I really encourage you to pick up one of his books or find one of his essays online. I'll drop in some links to a few of them below. He was a remarkable person. Hope you enjoy. Peace.

Jonathan Lear articles:
* “Aims of Education”
* “Inside and Outside the Republic”
* “A Case for Irony”
* “Wisdom Won from Illness” [this is actually the whole text of one of his books]
* “Transience and hope: A return to Freud in a time of pandemic”
* “Jumping from the Couch: An Essay on Phantasy and Emotional Structure”
* “Can the virtuous person exist in the modern world?”

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit danieloppenheimer.substack.com/subscribe

The Ricochet Audio Network Superfeed
Erick Erickson Show: S15 EP37: Hour 2 – Iran, Cuba, and Presidential Decisions

Feb 25, 2026 · 37:02


Jumping back to Iran, plus more State of the Union reaction.

Scaling Up Business Podcast
This Leadership Habit That's Shrinking Your Company

Feb 25, 2026 · 12:27


Are you growing your company, or are you keeping it small by being the hero? In this solo episode, Bill unpacks the hidden addiction leaders have to being needed: constantly jumping in, solving problems, and staying essential to everything. While this mindset may help you start a company, it will quietly sabotage your ability to scale it.

Topics explored in this episode:

(00:03) The Fastest Way to Stay Small
* Staying essential to everything keeps your company small
* Jumping in to solve every problem prevents team growth
* Being the hero steals others' opportunity to level up

(03:31) The Addiction to Being Needed
* Leaders often love being the hero and solving problems
* Hero mindset requires “victims”, which creates weak and dependent teams
* Real leadership means developing problem-solvers, not being the problem-solver

(09:19) Practical Delegation Shift
* Before solving a task, ask: Who should own this?
* Assign work based on others' strengths and pride points
* Even imperfect recognition builds trust and engagement
* Speak specifically to why someone is right for the challenge

Bill Gallagher, Scaling Coach and host of the Scaling Up Business podcast, is an international business coach who works with C-Suite leaders to achieve breakthrough growth.

Join Bill in the Growth Navigator Coaching Program: https://ScalingCoach.com/workshop
Bill on LinkedIn: https://www.LinkedIn.com/in/BillGall
Bill on YouTube: https://www.YouTube.com/@BillGallagherScalingCoach

Visit https://ScalingUp.com to learn more about Verne Harnish, our team of Scaling Up Coaches, and the Scaling Up Performance Platform, which includes coaching, learning, software, and summit. We share how the fastest-growing companies succeed where so many others fail. We help leadership teams with the biggest decisions around people, strategy, execution, and cash so that they can scale up successfully and beat the odds of business growth.

Did you enjoy today's episode? If so, then please leave a review! Help other business leaders discover Scaling Up Business with Bill Gallagher so they, too, can benefit from the ideas shared in these podcasts.

Subscribe via Spotify: https://spoti.fi/3PGhWPJ
Subscribe via Apple Podcasts: https://apple.co/3PKe00u
Bill on Facebook: https://www.facebook.com/billgall/
Bill on Twitter/X: https://x.com/billgall

Wrasslin Talk with Mayor McCall
WWE Legend Jumping Jim Brunzell (The Killer Bees)

Feb 24, 2026 · 56:07


Jim Is A Wrestling Legend and Icon Known to Fans All Across the Globe. He Is A 2 Time NWA Mid-Atlantic Champion. He Has Held Tag Gold In the AWA, NWA, Puerto Rico and The UWF. Along with B. Brian Blair, He Was Part Of One Of The Most Iconic Tag Teams In History : The Killer Bees !!!! Do Not Miss !!!!

The Business of Meetings
311: Overwhelmed is Not a Time Problem, It Is a Leadership Problem!

Feb 24, 2026 · 24:21


As a business owner, are you feeling overwhelmed? Eric believes that overwhelm is seldom about having too much to do. It happens when business owners fail to structure their time as a CEO should and instead react emotionally rather than lead strategically.

Overwhelm
Overwhelm often comes from reacting instead of leading. Jumping in to fix problems, answer clients, and put out fires feels productive, but it keeps you stuck working in the business instead of on it. Responsiveness is often mistaken for leadership, but constant firefighting is not a strategic approach to leading a team.

Role Confusion
As a business owner, you wear multiple hats. Without clearly defining which role you are playing at any given time, your brain never switches off. Constant mental switching is unsustainable. Blocking time for specific responsibilities will reduce mental clutter and restore focus.

Decisions Not Made
Unmade decisions accumulate over time. Niche, service scope, pricing, team expectations, and client expectations all require clarity. When they are not addressed at the right time, they pile up. Constantly facing all the decisions that need to be made saps your energy and heightens overwhelm.

Doubt Amplification
Revenue is a rollercoaster, not a straight line. When challenges arise, doubt surfaces. Questions like "Am I good enough?" or "What if this doesn't work?" begin to amplify. Every entrepreneur faces doubt, but it becomes dangerous when it takes over and paralyses you.

Priority Integrity
The issue is not time management but priority integrity. Without clear priorities, confusion grows. Business owners have three levels of work: survival work for their clients, stability work on their systems and financial clarity, and growth work on their marketing, sales pipelines, team development, and scalability. Most business owners tend to get stuck at the survival level.

Building Value and Freedom
Long-term value is created by focusing on growth and building a scalable model. The less involved you are in daily activities, the higher the value of your business. A transferable business must be structured and team-based, rather than relying on a single person to manage all the chaos.

Time Blocking
Decide what truly matters over the next 90 days, choose one objective that will make your revenue more predictable, and focus on it. Then, create some non-negotiable CEO time: at least two 90-minute blocks per week, with no interruptions. Use that time to strategize, review your pipeline, refine your pricing, design systems, and prepare playbooks. And every night, define three meaningful outcomes for the next day. Remember to focus on the outcomes, not the tasks.

Creating Clarity
Doubt often shows up when you raise prices, invest in support, delegate responsibility, or start saying no. Growth is uncomfortable, and that discomfort can easily be misinterpreted as a sign that something is wrong. The key is to separate emotion from evidence. Instead of relying on how things feel, look at the data: the size of your pipeline, your conversion rate, your margins, client retention, and your key performance indicators. Build a dashboard, review it consistently, and let the facts guide you. Clarity comes from evidence, not emotion.

Energy and Leadership
Overwhelm is often a sign that your energy is depleted. Sleep, training, learning, and setting aside uninterrupted focus time are essential. Constant accessibility destroys your ability to think strategically. If you do not have time to think, you will not have time to lead.

Practical Actions to Reduce Overwhelm
Block your time and focus on priorities. Create a list with five activities to eliminate and three to delegate within 30 days. Build systems for predictable revenue across sales and execution. Focus on what you should delegate so that you can focus on representing the business and maintaining client relationships. Create accountability with your peers through coaching or with a structured review. Overwhelm is often the byproduct of avoiding leadership. Always remember that high performers don't do it alone.

Connect with Eric Rozenberg: LinkedIn · Facebook · Instagram · Website
Listen to The Business of Meetings podcast
Subscribe to The Business of Meetings newsletter

Geek Ultimate Alliance
Transformers (2023) Vol 1: Robots In Disguise - A Walk Through The Multiverse Episode 212

Feb 24, 2026 · 30:49 · Transcription Available


Jumping back into some comic books, I go over some of my favorite parts of the current title for Transformers coming out of the Image Skybound imprint. This is the second book in the connected Energon Universe, and this is one you definitely need to check out.

Support The Alliance on Patreon & get ad-free, exclusive, early episodes: https://www.patreon.com/guanetwork

Geek Ultimate Alliance Network is produced by GeekVerse Podcast: www.geekverse.ca

Network Schedule
Monday: Rangers Alliance
Tuesday: A Walk Through The Multiverse
Wednesday: The Animation Nation
Thursday: Star Wars Alliance
Friday: Marvel Alliance
Saturday: DC Alliance

Follow the respective shows on Twitter so when they record live on GeekVerse Podcast Network you can join the chat and add to the conversation!

3AW Breakfast with Ross and John
Who Jon Anderson thinks are the greatest jumping ruckmen after hearing Max Gawn's comments

Feb 24, 2026 · 1:31


The sports reporter brought up a few of the best ruckmen who preferred jumping in a contest over wrestling.See omnystudio.com/listener for privacy information.

3AW is Football
Who Jon Anderson thinks are the greatest jumping ruckmen after hearing Max Gawn's comments

Feb 24, 2026 · 1:31


The sports reporter brought up a few of the best ruckmen who preferred jumping in a contest over wrestling.See omnystudio.com/listener for privacy information.

Moto Flakes
#61 RED CROSS JUMPING

Feb 24, 2026 · 85:03


Moooornin', folks. Episode 61, here we are again. Have fun listening. See you soon. Heeenre & Toomee

SLP Coffee Talk
Having Fun and Remembering Your Why

Feb 23, 2026 · 25:32


Hallie chats with Maddie Burrington about having fun in your work and remembering your “why”!

In this episode of SLP Coffee Talk, Hallie chats with Maddie Burrington—an elementary school SLP in Dallas and social media creator—about making speech the coolest club in school, setting boundaries to avoid burnout, and remembering your why. Maddie shares her journey from private practice burnout to falling in love with elementary schools, how she creates themed sessions that have kids begging for their turn, and why leaving work at work changed everything. You'll also hear about gratitude journaling, hobbies outside of speech, and building community through relatable content. Whether you're a new CF or a veteran SLP, this conversation is packed with practical tips, real talk, and reminders that you can't pour from an empty cup.

Bullet Points to Discuss:
* Maddie's SLP journey—from grad school through her CF year
* Making the leap from private practice to elementary schools
* Jumping into SLP social media and building community
* Setting boundaries, work-life balance, and hobbies that keep you sane
* Remembering your why and using gratitude journaling to avoid burnout

Here's what we learned:
* Themed sessions keep kids engaged—they're working on goals without even realizing it.
* Setting boundaries early prevents burnout—leave your laptop at work; there are no speech emergencies.
* Making speech fun creates buy-in—kids should feel like they're in an exclusive cool club.
* Gratitude journaling helps you reflect and grow—both personally and professionally.
* Finding community matters—sharing relatable content connects SLPs who understand what you're going through.

Learn more about Maddie Burrington:
Instagram: https://www.instagram.com/missmaddieslp/
TikTok: https://www.tiktok.com/@missmaddieslp
Hoo.be: https://hoo.be/missmaddie

Learn more about Hallie Sherman and SLP Elevate:

A Love Language Minute
Divorce and Remarriage

Feb 23, 2026 · 1:00 · Transcription Available


How can you know if you're ready to remarry after a divorce? Jumping immediately into another relationship is not recommended. Realize that remarriage is complicated, especially when there are children involved. Donate to Moody Radio: http://moodyradio.org/donateto/lovelanguageminute See omnystudio.com/listener for privacy information.

Sales Is King
210: Craig Bowman | SVP, Trellix

Feb 23, 2026 · 65:04


In this episode of Sales Is King, Dan sits down in the new Midtown Manhattan studio with Craig Bowman, SVP of Public Sector at Trellix and author of the new book Craft: CIA Elite Selling. Craig brings a wild career arc to the mic—from clandestine work with the CIA and the intelligence community to building high‑performing sales teams at Adobe and now leading public sector growth at scale. Craig unpacks how CIA tradecraft, “mission first” thinking, and AI can radically upgrade how you prospect, qualify, and win in complex B2B deals.

Key topics covered
* The CIA recruitment story: from a mysterious hotel lobby interview, underground parking garages, and VCR‑filled rooms to landing his first role under commercial cover.
* Moving from intelligence to entrepreneurship: starting, scaling, and selling his own government contracting company, then returning post‑9/11 for a new mission.
* Jumping into sales at Adobe: how he was recruited, doubled his salary, and built a new intelligence division by deeply understanding the mission—not just the tech.
* “In the mud with the customer”: why Craig literally went to the southern border with CBP to understand the mission and coined his mantra about getting in the trenches.
* Influence maps vs org charts: why the real power sits with the “knuckle‑draggers” in the back of the room, not just the CIO, and how to find and engage true influencers.
* Frameworks without rigidity: his take on MEDDIC, Challenger, and why you coach the bottom half differently while using top performers as mentors to “shift the middle.”
* The AI inflection point: how he rewrote his book mid‑stream to integrate AI, and why he now spends 70% of his time using AI agents as a personal chief of staff.
* Craig's live AI workflow: daily scripts that summarize email, corporate updates, and account intel; auto‑generated dossiers, personas, and value hypotheses.
* The 90‑Second Takeover: how to send a pre‑meeting hypothesis of value, then open meetings with clarity, validation, and a working session instead of random discovery.
* Humility as a superpower: the intern experiment that proved “humility emails” beat cold calls, and why genuine curiosity and asking for help unlock meetings.
* AI from the buyer's side: why your customers are already using AI to shortlist vendors and how you should be using AI the same way to qualify where you can truly win.
* Metrics that actually matter: the question Craig asks every customer about how they'll measure value 7 months after buying—then how he uses that in MEDDIC the right way.
* The seven criteria of a successful seller: why he evaluates inputs (character, curiosity, rigor) rather than just outputs (pipeline, quota).
* Mentors and pivotal leaders: from his grandfather and tough college professor to powerful women leaders in the intelligence community and sales leaders like Ken Karsten.

Who this episode is for
* Enterprise and public sector sellers trying to win complex, multi‑stakeholder deals.
* Sales leaders looking to blend frameworks like MEDDIC with modern AI and real coaching.
* Rev leaders who want their teams “in the mud with the customer” instead of stuck on Zoom.

Listen for these takeaways
* Why you must deeply understand your customer's mission—and often physically go to the “border” or “boat”—before pitching technology.
* How to build influence maps, not just chase titles on an org chart.
* A tested AI + email play that interns used to book meetings your team “could never get.”
* A simple question that turns MEDDIC metrics from guesswork into a mutual accountability pact.

Connect with Craig
* Book: Craft: CIA Elite Selling on Amazon (hardcover, ebook, and audiobook).
* Bonus material & AI scripts: unlock the members section using the book, or message Craig on LinkedIn if you bought the audio version.

If you're tired of canned discovery, bad qualification, and random acts of prospecting, this conversation will change how you think about mission, AI, and what “elite selling” really looks like.

Bulletproof Business Podcast
E155 - Episode 3 of a 5-Part Series: Managing Isn't a Personality Flaw

Feb 23, 2026 · 22:23


Why being "too involved" is a systems failure—not a leadership failure If you're still in the middle of decisions, approvals, and problem-solving, it's easy to assume you're failing as a leader. That you don't trust enough. That you can't let go. That you're somehow wired to micromanage. In this episode, we dismantle that lie. Managing isn't a personality flaw—it's feedback. It's a signal that leadership infrastructure is incomplete. And once you see that clearly, the shame lifts and the real work can begin. Managing isn't the opposite of leadership—it's what shows up when clarity, authority, and reinforcement aren't fully installed. You're not too involved because you're controlling. You're involved because decisions still route through you. Trust doesn't come first—clarity does. Autonomy isn't declared; it's built through structure, boundaries, and consistent reinforcement over time. Key Takeaways Managing is not micromanaging—managing is clarity under incomplete structure If decisions still escalate to you, it's a design issue—not a personality issue Trust is an outcome of clear systems, not a starting point Empowerment without authority and boundaries feels like risk, not ownership Gen Z isn't less capable—they're less willing to guess inside ambiguity Autonomy is a byproduct of structure, not motivation Jumping back in under pressure doesn't mean you failed—it means the system isn't strong enough yet Managing becomes permanent only when leaders stop redesigning the environment Even though the Vision Workshop has already happened, you can still access the full replay. If this episode helped you see that managing isn't your flaw—but a systems signal—the workshop walks you through how to build leadership infrastructure that removes you from the middle permanently.  Get the Vision Workshop replay here: https://aibusinessscalingblueprint.com/vision2026

Blues Music (Blues moose radio)
Episode 2152: Bluesmoose 2152-08-2026

Feb 21, 2026 · 59:57


Sugar Ray & The Bluetones – Blind Date  -  Blues from Sibculo - 2025 Guy Verlinde - me and my Blues - 2026 – singleOmar Coleman Igor Prado – I am leaving my no good woman  - Old, New Funky and Blue - 2026 Duke Robillard - I'll Be Glad When You're Dead (You Rascal You) - BLAST OFF! – 2026Studebaker John & The his maxwell street kings – Well allright  - Jumping form limb to limb – 2026Paladins – Going down to big mary's -  Live in Doornroosje Nijmegen – 1999Davy Knowles - Garbage man - Live at Bluesmoose radio 07-05-2017Foolhouse Bluesband – Further up on the road  -Live im Colos-Saal - 1994 

Old Gamer's Almanac
Baby Steps, Cairn, and more Peak (2025 and 2026)

Feb 19, 2026 · 107:26


Sorry it's been a minute since you got more content that I know you crave, but Hunbun got the flu so his butt was out of commission. This week I've got EJ over and we're talking about three different games that are all about climbing. Climbing is the new jumping. Jumping is dead.

Come see Hunter headline Sisyphus Brewing in Minneapolis! Hunter Donaldson Comedy (Featuring Jamie Carbone), Feb. 27th and 28th.

To see the most up-to-date version of The OGA 100, visit https://bckl.gg/oHm4

Music by nightcorey. https://soundcloud.com/nightcorey

Consider contributing to our show on Patreon. (https://www.patreon.com/oldgamersalmanac)

Email us your thoughts on the ongoing list at oldgamersalmanac(at)gmail(dot)com. Or come talk to us on our Discord. (https://discord.gg/ASG2YpyfPx)

Galnet News Digest
19 Feb 3312: The Lukewarm War / Jumping and Diving

Feb 19, 2026 · 2:30


The truce in the Radicoida enclave is over, but you probably wouldn't notice the difference. And the Distant Worlds 3 expedition is planning an SRV time trial that ends by diving into a crater.

RugbyPass Offload
Whistle Watch - Does rugby need to ban jumping?

Feb 19, 2026 · 21:54


Whistle Watch returns for another week with Nigel Owens joined by former Wales wing Alex Cuthbert to dissect all the challenging and curious refereeing incidents from the weekend of Six Nations action. Was Hollie Davidson right to penalise Ireland hooker Dan Sheehan for attempting to leap across the Italian defensive line to score a try? Did England winger Henry Arundell deserve a red card in the Calcutta Cup? And what do Nigel and Alex make of the current laws around jumping? Tune in for all that and more in the latest episode of Whistle Watch. Whistle Watch is in partnership with Emirates. Hosted on Acast. See acast.com/privacy for more information.

I am an Equestrian - Le Podcast

This episode is a teaser. The full interview will be available on Wednesday, February 25, 2026.

That Time I Got Reincarnated in the Same World as an Anime Podcaster

Get ready for the ultimate nostalgia trip, coming up next on the Fox Box! Moxie the Yeen and Isekai Sensei-Sama are sitting in the living room right in front of the console TV, ready for their weekly dose of anime. And to kick things off, it's time for Ultimate Muscle!

Chat with us instantly by clicking here!

Support the show

Check out our website, AnimePodcasterReincarnation.com, to leave a comment or check out our blog posts. Follow on Bluesky or Threads and subscribe on YouTube so you don't miss new episodes. You can also follow us on Facebook or Patreon, join our Discord server, or reach us by email at IsekaiSenseiSama@gmail.com.

A to Z Sports Nashville
Brian Daboll confirms his clear advantage jumping in as Titans offensive coordinator

Feb 19, 2026 · 57:10


For more Titans coverage, follow us here: https://www.atozsports.com/nashville
Podcasts: https://www.atozsports.com/podcasts
Facebook: https://www.facebook.com/atozsportsnashville
Instagram: https://www.instagram.com/atozsports/
Twitter: https://twitter.com/AtoZSports
TikTok: https://www.tiktok.com/@atozsportsnashville
#AtoZSports #TennesseeTitans #NFLFootball
Learn more about your ad choices. Visit megaphone.fm/adchoices

PS You Got this
"Jumping in the way back apparatus...2019 here go." (02/18/2026)

Feb 18, 2026 · 11:04


Welcome to The P.S. after dark (18+ content). Are you feeling funky... free... and groovy? Stop on by. Tonight we ask the tough questions. Drop your comments, thoughts and ideas. Find us on X. See y'all on the other side.

NASTY KNUCKLES PODCAST
Episode 227 featuring Brian Boucher

Feb 17, 2026 · 43:42


Riley Cote and Derek Settlemyre start the show talking about the Olympics, and some other projects the boys are working on. Former Philadelphia Flyers goalie and current analyst Brian Boucher joined us for an interview while he's in Italy broadcasting the Olympics! Boosh talks to us about the preliminary round, the center ice goal Jeremy Swayman let in, some players who have surprised him so far, Tom Wilson's fight, and his thoughts on MacKinnon and Celebrini's tournaments so far. Jumping into some Flyers talk, we discuss all the drama around Matvei Michkov and Rick Tocchet, Dan Vladar and Sam Ersson's seasons, and Philly's playoff chances.

Go to gt-wholesale.com and use coupon code "nasty" for 15% off.

Nasty Knuckles is a Baller Sports Network production, created by co-hosts Riley Cote and Derek "Nasty" Settlemyre. The show features a mix of interviews, never-before-heard storytelling, hockey talk, and maybe some pranks... The guys bring in some of the biggest names in the hockey world for your enjoyment! Make sure to check back every week as the guys release a new episode weekly!

► Click here to shop our latest merch: nastyknuckles.com/shop
► Follow the show on Twitter: https://twitter.com/NastyKnuckles
► Follow Riley Cote on Twitter: https://twitter.com/rileycote32
► Follow Riley Cote on Instagram: https://instagram.com/rileycote32
► Follow Derek Settlemyre on Twitter: https://twitter.com/dnastyworld
► Follow Derek Settlemyre on Instagram: https://instagram.com/dnastyworld

Hosted on Acast. See acast.com/privacy for more information.

A Parenting Resource for Children’s Behavior and Mental Health
Device Dysregulation™: The Surprising Way Screens Rewire Your Child's Brain | Emotional Dysregulation | E382

Feb 16, 2026 · 23:45


Device Dysregulation™ can leave children overstimulated, anxious, and struggling to calm their brains after screen use. In this episode, Dr. Roseann Capanna-Hodge, expert in Regulation First Parenting™, explains how screens impact emotional regulation and shares strategies to help kids reset and thrive.

Parenting with constant screens can feel overwhelming. You're not alone. Post-pandemic, many kids became overstimulated from online learning and social media, leaving parents unsure how to help. Device dysregulation isn't just screen time—it's a brain stuck in high alert, craving dopamine, and losing tolerance for calm. In this episode, you'll learn why kids get stuck in device dysregulation, how to prevent emotional dysregulation, and concrete strategies for transitions, boundaries, and sensory resets that make real change possible.

Why does my child melt down when I ask them to put the device down?
Meltdowns aren't defiance—they're the nervous system signaling overwhelm. Rapid-fire entertainment, dopamine spikes, and addictive social media can keep the brain in a constant high alert, often leading to emotion regulation difficulties and maladaptive emotion regulation strategies. These challenges affect children's emotional responses, increase negative emotions, and in some cases can mimic symptoms seen in mental disorders or contribute to problematic internet use.

Tips for parents:
* Co-regulate first: Model calm so your child can borrow your regulation and practice healthier emotion regulation strategies.
* Avoid personalization: Their reactions aren't about you—they're dysregulated.
* Predictable boundaries: Set device limits before the screen is on to reduce conflict and support consistent, regulated emotional responses.

Real-Life Example: Eli, a 12-year-old, became irritable and anxious post-pandemic. Consistent screen limits and calm parental cues helped him power down without daily battles.

How can I help my child regulate after excessive screen time?
Transitions from screens are tricky because the brain is overstimulated. Without grounding, kids and young adults can struggle with emotional awareness, executive functioning, and attention, increasing the risk of temper tantrums, negative emotional states, and experiencing negative emotions.

Practical strategies:
* Sensory transitions: Jumping jacks, cold water, a sensory snack, or barefoot walks reset the nervous system.
* Model coping: Show how you unplug and shift focus calmly.
* Gradual transitions: Use timers and warnings for device cutoff to reduce negative emotions and prevent meltdowns.

If you're tired of walking on eggshells or feeling like nothing works… Get the FREE Regulation Rescue Kit and finally learn what to say and do in the heat of the moment. Become a Dysregulation Insider VIP at

The Simple Man Podcast
Craig Jones vs Dillon Danis, Damien Anderson Next Stop UFC, Analyzing Jiu Jitsu | The Simple Man Podcast Ep. 158

Feb 16, 2026 · 102:16


Don't forget to Like & Subscribe to GET SIMPLIFIED! Join Nicky Rod, Nicky Ryan, and Damien Anderson in the Simple Man studio for a mini Q&A, fight predictions, and more.

Instagram
The Podcast: @thesimplemanpodcast
Come Train with Us: @simplemanmartialarts
Hosts: @bjjdamien, @nickyrod247, @ethan.crelinsten, @nickyryanbjj
Producer: @allywolski

C4: @c4energy · https://glnk.io/44o9/bjjdamien · Code SIMPLEMAN for 15% off your order!

Marek Health:

Firearms Radio Network (All Shows)
Misfits Media 103 – Jumping Bullets

Feb 15, 2026


Thank you for joining us on the Misfits Media Podcast. This episode is packed full of topics including new product announcements, beginner shooter advice, and as always, there's listener feedback, What Would You Do, Guess the Gun, Gay or Gray and Fully-Semi-Automatic. So load, make ready and join in on the fun.

Sponsors:
Title Sponsor: A&J Sporting https://aandjsporting.com/ (use code ‘MM10' for 10% off qualifying purchases)

Travis's Garage Woodworking and Laser Engraving
Email: tgwoodandlaser@gmail.com
Facebook: https://www.facebook.com/profile.php?id=61557696364367&sk=about
Instagram: https://www.instagram.com/tgwoodlaser/

Red Mist Tripods
Website: https://www.red-mist-tripods.com/
Facebook: https://www.facebook.com/people/RED-MIST-Tripods/61556579574054/
Instagram: https://www.instagram.com/red_mist_tripods/

Misfits Media Podcast
Email: misfitsmediagroup@gmail.com
Patreon: https://patreon.com/MisfitsMediaPodcast
YouTube: https://www.youtube.com/@Misfits_Media_Podcast
FB: https://www.facebook.com/people/Misfits-Media-Podcast/61559504157666/
IG: https://www.instagram.com/misfits_media_podcast/
Firearms Radio Network: https://firearmsradio.net/category/podcasts/misfits-media/
Amazon Music: https://music.amazon.com/podcasts/2790080b-8a79-4db7-b3af-3aecf427ef1b/misfits-media
Podbean: https://www.podbean.com/pw/dir-ihfas-1fffbb

Full Circle Reloading & Firearms:
Website: https://fullcirclereloading.com/home
YouTube: https://www.youtube.com/@FullCircleReloading

Products / Companies / Show Mentions:
Gideon Optics ‘Pebble': https://gideonoptics.com/shop-all/pebble-reflex-sight/
Timney Triggers: https://timneytriggers.com/
Redacted Images: https://www.instagram.com/redacted.images/ and https://www.facebook.com/redactedimage/

2nd Amendment Organizations:
Gun Owners of America: https://www.gunowners.org/
Firearms Policy Coalition: https://www.firearmspolicy.org/
Second Amendment Foundation: https://saf.org/

Arroe Collins Like It's Live
Bone Chilling Family Of Spies From Christine Kuehn And Mark Schiponi

Feb 15, 2026 · 10:42 · Transcription Available


A World War II Story of Nazi Espionage, Betrayal, and the Secret History Behind Pearl Harbor.

It began with a letter from a screenwriter, asking about a story. Your family. World War II. Nazi spies. Christine Kuehn was shocked and confused. When she asked her seventy-year-old father, Eberhard, what this could possibly be about, he stalled, deflected, demurred, and then wept. He knew this day would come. The Kuehns, a prominent Berlin family, saw the rise of the Nazis as a way out of the hard times that had befallen them. When the daughter of the family, Eberhard's sister, Ruth, met Nazi leader Joseph Goebbels at a party, the two hit it off, and they had an affair. But Ruth had a secret—she was half Jewish—and Goebbels found out. Rather than having Ruth killed, Goebbels instead sent the entire Kuehn family to Hawaii, to work as spies half a world away. There, Ruth and her parents established an intricate spy operation from their home, just a few miles down the road from Pearl Harbor, shielding Eberhard from the truth. They passed secrets to the Japanese, leading to the devastating attack on Pearl Harbor. After Eberhard's father was arrested and tried for his involvement in planning the assault, Eberhard learned the harsh truth about his family and faced a decision that would change the path of the Kuehn family forever. Jumping back and forth between Christine discovering her family's secret and the untold past of the spies in Germany, Japan, and Hawaii, Family of Spies is fast-paced history at its finest and will rewrite the narrative of December 7, 1941.

Become a supporter of this podcast: https://www.spreaker.com/podcast/arroe-collins-like-it-s-live--4113802/support.

Precision Rifle Network
Episode 103: Jumping Bullets

Feb 15, 2026 · 141:19


Thank you for joining us on the Misfits Media Podcast. This episode is packed full of topics including new product announcements, beginner shooter advice, and as always, there's listener feedback, What Would You Do, Guess the Gun, Gay or Gray and Fully-Semi-Automatic. So load, make ready and join in on the fun.

Sponsors:
Title Sponsor: A&J Sporting https://aandjsporting.com/ (use code ‘MM10' for 10% off qualifying purchases)

Travis's Garage Woodworking and Laser Engraving
Email: tgwoodandlaser@gmail.com
Facebook: https://www.facebook.com/profile.php?id=61557696364367&sk=about
Instagram: https://www.instagram.com/tgwoodlaser/

Red Mist Tripods
Website: https://www.red-mist-tripods.com/
Facebook: https://www.facebook.com/people/RED-MIST-Tripods/61556579574054/
Instagram: https://www.instagram.com/red_mist_tripods/

Misfits Media Podcast
Email: misfitsmediagroup@gmail.com
Patreon: https://patreon.com/MisfitsMediaPodcast
YouTube: https://www.youtube.com/@Misfits_Media_Podcast
FB: https://www.facebook.com/people/Misfits-Media-Podcast/61559504157666/
IG: https://www.instagram.com/misfits_media_podcast/
Firearms Radio Network: https://firearmsradio.net/category/podcasts/misfits-media/
Amazon Music: https://music.amazon.com/podcasts/2790080b-8a79-4db7-b3af-3aecf427ef1b/misfits-media
Podbean: https://www.podbean.com/pw/dir-ihfas-1fffbb

Full Circle Reloading & Firearms:
Website: https://fullcirclereloading.com/home
YouTube: https://www.youtube.com/@FullCircleReloading

Products / Companies / Show Mentions:
Gideon Optics ‘Pebble': https://gideonoptics.com/shop-all/pebble-reflex-sight/
Timney Triggers: https://timneytriggers.com/
Redacted Images: https://www.instagram.com/redacted.images/ and https://www.facebook.com/redactedimage/

2nd Amendment Organizations:
Gun Owners of America: https://www.gunowners.org/
Firearms Policy Coalition: https://www.firearmspolicy.org/
Second Amendment Foundation: https://saf.org/

Slacker & Steve
Full show - FrYiday | T. Hack's controversial portable meat rankings | Jumping off a cliff | Erica wants Slacker to get a spray tan | What's the most offensive curse word? | What makes a salad a salad? | Screen divorce | What would you pay someone to do for you? | Stupid injury | Stupid stories

Slacker & Steve

Play Episode Listen Later Feb 14, 2026 78:21


Full show - FrYiday | T. Hack's controversial portable meat rankings | Jumping off a cliff | Erica wants Slacker to get a spray tan | What's the most offensive curse word? | What makes a salad a salad? | Screen divorce | What would you pay someone to do for you? | Stupid injury | Stupid stories

www.instagram.com/theslackershow
www.instagram.com/ericasheaaa
www.instagram.com/thackiswack
www.instagram.com/radioerin

KNBR Podcast
Tony Vitello on Jumping to MLB, Giants Staff Roles & Bryce Eldridge's OF Reps

Feb 13, 2026 · 21:11


Giants manager Tony Vitello joins the show to explain how he’s adapting from college baseball to leading a Major League clubhouse. He breaks down the importance of his coaching staff and reveals who pushed for Bryce Eldridge to get reps in the outfield. See omnystudio.com/listener for privacy information.

Murph & Mac Podcast
Tony Vitello on Jumping to MLB, Giants Staff Roles & Bryce Eldridge's OF Reps

Murph & Mac Podcast

Play Episode Listen Later Feb 13, 2026 21:11


Giants manager Tony Vitello joins the show to explain how he’s adapting from college baseball to leading a Major League clubhouse. He breaks down the importance of his coaching staff and reveals who pushed for Bryce Eldridge to get reps in the outfield. See omnystudio.com/listener for privacy information.

Rena Malik, MD Podcast
How to Build Resilient Bones and Joints for Lifelong Strength with Dr. Vonda Wright

Feb 13, 2026 · 88:26


In this episode, Dr. Rena Malik, MD is joined by orthopedic surgeon Dr. Vonda Wright to explore the essentials of musculoskeletal health and longevity. They discuss the surprising role of bones as endocrine organs, practical steps to optimize bone and joint health through lifestyle, exercise, and hormones, and strategies to prevent debilitating fractures as we age. With actionable insights and evidence-based recommendations, listeners will gain the tools to maintain strength, mobility, and independence throughout life.

Become a Member to Receive Exclusive Content: renamalik.supercast.com
Schedule an appointment with me: https://www.renamalikmd.com/appointments

▶️ Chapters:
00:00:00 Introduction
00:05:10 What Makes Bones Healthy
00:11:05 Hormones & Bone Loss
00:15:49 Fractures & Real Risks
00:19:46 Jumping, Lifting & Impact
00:26:54 Training Mistakes & Recovery
00:34:31 Strength, Mobility & Longevity
00:41:17 Young People & Bone Health
00:53:14 Joints, Arthritis & Running
01:00:18 PRP & Modern Treatments
01:12:47 Hips, Pelvic Floor & Function
01:19:49 Future of Orthopedics
01:24:00 Closing Questions & Takeaways

Stay connected with Dr. Vonda Wright on social media for daily insights and updates. Don't miss out—follow her now and check out these links!
INSTAGRAM - https://www.instagram.com/drvondawright/?hl=en
FACEBOOK - https://www.facebook.com/DrVonda/
YOUTUBE - https://www.youtube.com/user/vondawright
X - https://twitter.com/drvondawright
WEBSITE - https://www.drvondawright.com/
Unbreakable: A Woman's Guide to Aging with Power by Vonda Wright, MD - https://www.penguinrandomhouse.com/books/777365/unbreakable-by-vonda-wright-md/

Let's Connect!:
WEBSITE: http://www.renamalikmd.com
YOUTUBE: https://www.youtube.com/@RenaMalikMD
INSTAGRAM: http://www.instagram.com/RenaMalikMD
TWITTER: http://twitter.com/RenaMalikMD
FACEBOOK: https://www.facebook.com/RenaMalikMD/
LINKEDIN: https://www.linkedin.com/in/renadmalik
PINTEREST: https://www.pinterest.com/renamalikmd/
TIKTOK: https://www.tiktok.com/RenaMalikMD

DISCLAIMER: This podcast is purely educational and does not constitute medical advice. The content of this podcast is my personal opinion, and not that of my employer(s). Use of this information is at your own risk. Rena Malik, M.D. will not assume any liability for any direct or indirect losses or damages that may result from the use of information contained in this podcast including but not limited to economic loss, injury, illness or death.

Learn more about your ad choices. Visit megaphone.fm/adchoices

TV CONFIDENTIAL: A radio talk show about television
Why Jennifer Jones is The Jackie Robinson of The Rockettes

Feb 13, 2026 · 8:32


Please enjoy this special preview of our upcoming conversation with Jennifer Jones, the first African-American member of the world-renowned Radio City Rockettes, and an award-winning performer who is celebrated for her pioneering achievements and unwavering advocacy for equal rights in the arts. Jennifer's memoir, Becoming Spectacular: The Rhythm of Resilience from The First African-American Rockette, not only tells the story of how she helped establish a transformative era for The Rockettes while inspiring other Black dancers, but also recounts her triumphant battle against colorectal cancer in 2018. February is Black History Month. March is both Women's History Month and Colorectal Cancer Awareness Month. Becoming Spectacular is available wherever books are sold through Amistad Books, an imprint of HarperCollins. Our complete conversation with Jennifer Jones will air during the weekend of Feb. 27 on TV Confidential. For our listeners in the Greater L.A. Metro area, Jennifer Jones' story is also included in This Joint is Jumping, a new exhibit at The Hollywood Museum that honors the contributions of many notable Black artists, singers, actors, writers and sports figures, including Whitney Houston, Lena Horne, Denzel Washington, Ella Fitzgerald, The Pointer Sisters, Dionne Warwick, Forest Whitaker, Wesley Snipes, Eddie Murphy, Richard Pryor, Angela Bassett, Muhammad Ali, Will Smith, Halle Berry, Viola Davis, Diana Ross, and Oprah Winfrey. This Joint is Jumping opens to the public on Friday, Feb. 19. For tickets and more information: TheHollywoodMuseum.com

Inside the Tunnel: A Virginia Tech Sports Podcast
Kolby's Corner: Virginia Tech Mailbag, Tyrell Smith's New Job & Team Scoop

Feb 12, 2026 · 24:15


Jumping into a full Virginia Tech mailbag tonight. We'll discuss Tyrell Smith's new role, spring practice start dates, and some early team buzz coming out of the building. I'll also hit your questions live and share a little behind-the-scenes scoop as we head toward spring ball. Drop questions in the chat and let's get into it. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto frontier.
Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.
Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.
Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.
Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?
Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader use cases. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with the solution in 2014.
Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.
Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.
Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.
Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels.
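For readers who want the mechanics: a minimal sketch of that logits-as-soft-supervision loss in PyTorch, in the spirit of the Hinton, Vinyals and Dean (2015) formulation. The temperature and mixing weight here are illustrative choices, not values used for any real model.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: the teacher's full output distribution, softened by
    # temperature T, carries far more signal per example than the hard label.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable to the hard-label term
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```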
And so, you know, I think what we've observed is you can get very close to your largest model's performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.
Shawn Wang [00:07:02]: So, Dara asked, so the original map was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?
Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference time scaling can be a useful thing to improve the capabilities of the model.
Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.
Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.
Shawn Wang [00:07:50]: No, I mean, it's just the economics-wise, like because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's, yeah, it's in everything.
Jeff Dean [00:08:02]: We're using it more in our search products of various kinds, AI Mode, AI Overviews.
Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, that's, yeah, I didn't even think about that.
Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens when you ask the model to do so. So, you know, if you're going to ask the model to do something, you want it to run until it actually finishes what you asked it to do, because you're going to ask now not just "write me a for loop," but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs; the interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kinds of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.
Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for, like, the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about the capability as: in certain tasks, the Pro model today has saturated some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to, like, keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to make the next generation even better?
Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or, like, test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you keep pushing the team internally, like, this is what we're building towards? Yeah.
Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have, um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on, more specialized for this particular kind of task? Do we need, um, you know, a bunch of architectural improvements or some sort of model capability improvements? You know, what would help make that better?
Shawn Wang [00:12:53]: Is there, is there such an example where, uh, a benchmark inspired an architectural improvement? Like, uh, I'm just kind of jumping on that because you just...
Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know...
Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. Everyone had it, and I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.
Jeff Dean [00:13:23]: I mean, I think, um, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something; benchmarks don't actually go, you know, much larger than 128K these days or so. We're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful; the opportunities to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take-all-this-content-and-produce-this-kind-of-answer tasks from a long context, that better assess what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?
Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.
Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission. So like your emails, your photos, your docs, your plane tickets. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.
Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which, like, very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.
Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or, you know, various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it sort of tells the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, and something I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example was vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's, that's also like a vision-capable thing. Like, so, so maybe vision is just the king modality. Yeah.
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do, interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.
Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
Jeff Dean [00:19:15]: Yeah. Yeah. I mean, I think people kind of are not necessarily aware of what the Gemini models can actually do. Like I have an example I've used in one of my talks.
It was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of as a turn-video-into-a-SQL-like-table task.
Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like, how do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search and span versus the more human one? Yeah.
Jeff Dean [00:20:47]: I mean, I think even pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index; many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sorts of signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or, you know, 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the task that the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117, with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.
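The funnel described here maps naturally onto a staged cascade. A toy sketch, where `cheap_score`, `rerank_score`, and `frontier_model` are hypothetical stand-ins for progressively more expensive models:

```python
import heapq

def answer(query, corpus, cheap_score, rerank_score, frontier_model):
    # Stage 1: very lightweight scoring over everything (think keyword or
    # embedding similarity), narrowing the whole corpus to ~30,000 docs.
    candidates = heapq.nlargest(30_000, corpus, key=lambda d: cheap_score(query, d))
    # Stage 2: a somewhat more sophisticated model picks the 117 that matter.
    shortlist = heapq.nlargest(117, candidates, key=lambda d: rerank_score(query, d))
    # Stage 3: only the shortlist is read by the most capable (and most
    # expensive) model, giving the illusion of attending to the whole corpus.
    return frontier_model(query, shortlist)
```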
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was, like, basically immediately put inside of Google search, and that improved results a lot, right? Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys... those are obviously the most important numbers to Google. Yeah.
Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.
Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this semantic ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me for YouTube's size.
Jeff Dean [00:23:50]: And then most recently Grok also, for xAI, which is like, yeah. I mean, I'll call out, even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
Shawn Wang [00:24:06]: So do you have like a history of, like, what's the progression? Oh yeah.
Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, sort of, but we went through four or five or six generations of redesigning of the search and retrieval system from about 1999 through 2004 or five, and that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
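A toy illustration of why the in-memory move mattered: once postings live in RAM, expanding a three- or four-word query to 50 terms costs memory lookups rather than a disk seek per term per shard. Every name here is invented for illustration:

```python
from collections import defaultdict

class Shard:
    """One slice of the document space, postings held entirely in memory."""
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids

    def hits(self, terms):
        return set().union(*(self.postings[t] for t in terms)) if terms else set()

SYNONYMS = {"restaurant": ["restaurants", "cafe", "bistro"]}  # illustrative

def search(shards, query_terms):
    expanded = list(query_terms)
    for t in query_terms:
        expanded += SYNONYMS.get(t, [])   # cheap now that lookups are in RAM
    results = set()
    for shard in shards:                  # fan out to every shard, then merge
        results |= shard.hits(expanded)
    return results
```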
Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001 the internet is like doubling, tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there just any, you know, principles that you use to think about this? Yeah.
Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing it, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by factors of five or 10, but probably not beyond that, because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X but all of a sudden at a hundred X makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines actually can hold, uh, you know, a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.
Shawn Wang [00:28:55]: Yeah.
Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.
Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?
Jeff Dean [00:29:04]: Because all of a sudden, news-related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.
Shawn Wang [00:29:11]: News is a special beast. Was there any, like, you could have split it onto a separate system.
Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be sort of updated.
Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to, like, classify whether the page is, you have to decide which pages should be updated and at what frequency.
Oh yeah.
Jeff Dean [00:29:30]: There's a whole, like, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because, uh, the likelihood they change might be low, but the value of having them updated is high.
Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down?
Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the U.S. to the Netherlands or something? Um...
Shawn Wang [00:30:21]: Why the Netherlands, by the way? Or is it, is that because of Chrome?
Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. Um, so, I mean, I think this gets to the point of being able to do the back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something for the result page, you know, how would I do that? I could pre-compute the image thumbnails. I could, like, try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...
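For reference, these are the widely circulated, circa-2010s order-of-magnitude figures from that list, applied to the thumbnailing exercise Jeff describes; treat the numbers as anchors for estimation, not measurements of modern hardware, and the 30-image page as a made-up workload:

```python
NANOSECONDS = {
    "L1 cache reference": 0.5,
    "branch mispredict": 5,
    "main memory reference": 100,
    "read 1 MB sequentially from memory": 250_000,
    "disk seek": 10_000_000,
    "read 1 MB sequentially from disk": 30_000_000,
    "round trip US <-> Netherlands": 150_000_000,
}

# Back-of-envelope: serve a results page with 30 image thumbnails.
seek = NANOSECONDS["disk seek"]
read_mb = NANOSECONDS["read 1 MB sequentially from disk"]
# Design A: thumbnail on the fly, reading 30 full images (~256 KB each) from disk.
design_a_ms = 30 * (seek + 0.25 * read_mb) / 1e6
# Design B: precomputed ~8 KB thumbnails packed contiguously: one seek, one ~240 KB read.
design_b_ms = (seek + 0.24 * read_mb) / 1e6
print(f"on the fly: ~{design_a_ms:.0f} ms, precomputed: ~{design_b_ms:.0f} ms")  # ~525 vs ~17
```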
Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...
Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.
Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either, like, on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.
Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.
Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching: because if you move, like, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
Shawn Wang [00:33:40]: Yeah. Yeah. Right.
Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.
Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.
Shawn Wang [00:33:56]: The best latency.
Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
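Turning that argument into arithmetic with the two rough costs quoted above (about 1,000 pJ to move a weight across the chip, about 1 pJ per multiply) shows how the batch dimension amortizes data movement. A cartoon model, not a measurement of any real accelerator:

```python
MOVE_PJ = 1000.0  # move one weight from far SRAM into the multiplier (rough figure)
MAC_PJ = 1.0      # one multiply-accumulate on that weight (rough figure)

def joules_per_token(batch_size, n_params=1e9):
    # Each weight is moved once per step and reused batch_size times,
    # so the movement energy is amortized across the whole batch.
    move = n_params * MOVE_PJ / batch_size
    compute = n_params * MAC_PJ
    return (move + compute) / 1e12  # picojoules -> joules

for b in (1, 8, 64, 256):
    print(f"batch {b:3d}: {joules_per_token(b):.4f} J/token")
# batch 1 pays ~1000x the multiply energy in data movement; batch 256 only ~4x.
```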
Shawn Wang [00:34:04]: Is there a similar trick, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? To serve at your scale, uh, you probably sort of saw that coming. Like, what, what hardware innovations or insights were formed because of what you're seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost, uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish-scale model over, say, 16 or 64 chips. And if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how much do you decide where the improvements have to go? So, like, this is a good example: is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like, what was the internal discussion? Yeah.
Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between, say, the TPU chip design architecture team and the sort of higher-level modeling, uh, experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the sort of ML research puck is going, uh, in some sense. Because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center, and then it has to sort of have a reasonable lifetime as a chip, which might be three, four, or five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field. And so having people with interesting ML research ideas of things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require that the chip design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out, so we'll do lots of careful ML experimentation to show us, uh, this is actually the way we want to go. Yeah.
Alessio Fanelli [00:37:58]: Is there a reverse of, like, we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower-precision things that are coming in a future generation, so you might train at that lower precision, even if the current generation doesn't quite do that. Mm.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary...
Jeff Dean [00:38:43]: Uh, yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights.
Shawn Wang [00:39:15]: Scaling. How does it, how does it... okay. Interesting. So low, low precision, but scaled-up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w-w-while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know. Uh, at the end of this, we're going to have all these chips that'll do, like, very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy-based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but, like, what's your commentary?
Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends, though.
Energy-based models is one; you know, diffusion-based models, which don't sort of sequentially decode tokens, is another; um, you know, speculative decoding is a way that you can get sort of an equivalent, very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: Batch factor, uh, where, like, you predict eight tokens out, and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy-based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are going to be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent-size models more cheaply and with lower latency.
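A schematic of the speculative decoding loop being described, with `draft_model` and `target_model` as hypothetical stand-ins; production implementations accept or reject draft tokens with a rejection-sampling rule rather than the exact-match test used in this sketch:

```python
def speculative_step(draft_model, target_model, prefix, k=8):
    # 1. The small draft model cheaply proposes k tokens.
    draft = draft_model.sample(prefix, n_tokens=k)
    # 2. The big model scores all k positions in ONE pass; the k draft tokens
    #    act like a batch dimension, amortizing the cost of moving its weights.
    verified = target_model.greedy_tokens(prefix, draft)
    # 3. Keep the agreeing prefix (typically 5-6 of 8) plus one corrected token,
    #    so each expensive pass yields several tokens instead of one.
    n = next((i for i, (d, v) in enumerate(zip(draft, verified)) if d != v), k)
    return draft[:n] + ([verified[n]] if n < k else [])
```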
Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.
Jeff Dean [00:41:23]: I mean, I think there's also sort of the more exotic things, like analog-based, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting, cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing, uh, at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with sort of, uh, much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research...
Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems in how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to sort of build, uh, things that can accomplish, uh, you know, much more significant pieces of work collectively than you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that, effectively, that would, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious, like, when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode; in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like, what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is the verifiable part that you can score, or what are, like, yeah, yeah. How, how would you model that, that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be, you know, a critic, as opposed to an actual retrieval system. Yeah.
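In its simplest form, the "same model, prompted differently as a critic" pattern looks something like the sketch below; `generate` is any LLM call, and the prompt and 1-10 scale are invented for illustration:

```python
CRITIC_PROMPT = (
    "Rate the following answer to the question on a 1-10 scale for relevance "
    "and accuracy. Reply with only the number.\n"
    "Question: {q}\nAnswer: {a}"
)

def critic_reward(generate, question, answer):
    # One model produced `answer`; the same model, re-prompted, now grades it.
    # The parsed score can serve as a reward signal for non-verifiable tasks.
    reply = generate(CRITIC_PROMPT.format(q=question, a=answer))
    try:
        return max(0.0, min(1.0, float(reply.strip()) / 10.0))
    except ValueError:
        return 0.0  # an unparseable critique yields no reward
```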
Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where, like, it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And, uh, exactly with this RLVR thing, where everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know: LLM-as-judge.
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things, and they fall down around the edges of those things, and, and are not as capable as we'd like in those areas. And then coming up with good techniques, and trying those, and seeing which ones actually make a difference, is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now, where you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.
Shawn Wang [00:46:13]: Yeah.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.
Shawn Wang [00:46:20]: That would be. As far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.
Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers, as it turns out. Um, uh, just to draw a bit on the IMO goal: um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about, like, the merger of symbolic systems and, and LLMs, uh, was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have, like, a symbolic representation in our heads. Right. We have some distributed representation that is neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural-net-based models. So it never made sense to me to have, like, completely separate, uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.
Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.
Jeff Dean [00:48:06]: I mean, I do think, like, that IMO effort, with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to '16 era of machine learning, right? Like, it used to be people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model. Or I want to, you know, do speech recognition: I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. Like, one of my, uh, so I interviewed ETA, who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that, like, people with this universal skill set of just, like, machine learning, you just give them data and give them enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.
Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh...
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And who knows, Gemini Pro is maybe one to ten trillion parameters, we don't know. But take the Gemma models: a lot of people want open, local models like that, and those models carry knowledge that isn't necessary, because they can't know everything. With the big model you have the luxury that it should be capable of everything, but when you're distilling down to the small models, you're memorizing things that are not useful. So do we want to extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I think you do want the model to be most effective at reasoning if it can retrieve things, because having the model devote precious parameter space to remembering obscure facts that could be looked up is not the best use of that parameter space. You might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you don't want your model to be completely detached from knowing things about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: ...and reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: We're probably not going to train Gemini on my email. We'd rather have a single model that can use retrieving from my email as a tool, reason about it, retrieve from my photos or whatever, then make use of that across multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those short-term stopgaps, or?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for robotics. We're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities.
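The multi-stage retrieve-and-reason loop Jeff describes can be pictured as a simple tool loop. This is a toy illustration under stated assumptions: generate() and search_email() are invented placeholders, and the TOOL:/ANSWER: text protocol is made up for the example, not an actual Gemini interface.

```python
# Toy sketch of multi-stage retrieval plus reasoning over private data.
# ASSUMPTIONS: `generate` and `search_email` are invented placeholders,
# and the TOOL:/ANSWER: text protocol is made up for this example.

def generate(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any LLM call

def search_email(query: str) -> list[str]:
    raise NotImplementedError  # stand-in for a private email index

TOOLS = {"search_email": search_email}

def answer(question: str, max_stages: int = 3) -> str:
    context: list[str] = []
    for _ in range(max_stages):
        reply = generate(
            f"Question: {question}\n"
            f"Context so far: {context}\n"
            "Reply TOOL:search_email:<query> to retrieve more, "
            "or ANSWER:<text> when you can answer."
        )
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):]
        _, tool, query = reply.split(":", 2)   # e.g. TOOL:search_email:trip receipts
        context.extend(TOOLS[tool](query))     # stage N results feed stage N+1
    # Fall back to answering with whatever was gathered.
    return generate(f"Question: {question}\nContext: {context}\nAnswer directly.")
```

The key design point is that the model itself decides whether another retrieval stage is needed, so reasoning and retrieval interleave instead of happening once each.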
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making those kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It will still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer because we didn't expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all knitted together to work in concert and called upon in different circumstances. If I have a health-related question, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion or a trillion tokens of health data.

Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? And if I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, healthcare is a particularly challenging domain. There's a lot of healthcare data that we appropriately don't have access to, but there are a lot of healthcare organizations that want to train models on their own data, data that is not public. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke but might be better than a general model trained on public data.

Shawn Wang [00:55:58]: By the way, somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns it.
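One way to picture the "modules knitted together" idea is a router that consults an installable domain module before the base model refines the answer. All names below are invented for illustration; this is not how Gemini modules work, only a sketch of the flow being described.

```python
# Toy sketch of a base model plus an "installable" domain module.
# ASSUMPTIONS: every name here is invented; real modular serving would
# not be a keyword router, this only illustrates the flow.

from typing import Callable

def base_model(prompt: str) -> str:
    raise NotImplementedError  # balanced general-purpose model

def health_module(prompt: str) -> str:
    raise NotImplementedError  # module pretrained on extra health tokens

MODULES: dict[str, Callable[[str], str]] = {"health": health_module}

def classify_domain(prompt: str) -> str:
    # Could itself be a model call; keyword matching keeps the sketch short.
    health_words = ("symptom", "diagnosis", "dosage")
    return "health" if any(w in prompt.lower() for w in health_words) else "general"

def route(prompt: str) -> str:
    domain = classify_domain(prompt)
    if domain in MODULES:
        draft = MODULES[domain](prompt)  # consult the installed module first
        return base_model(f"Domain notes:\n{draft}\n\nRefine and answer:\n{prompt}")
    return base_model(prompt)
```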
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, put it in the context. But you put the whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world for it. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.
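The low-resource-language trick is pure in-context learning: put every scrap of material about the language into one long prompt and ask for a translation. A minimal sketch, assuming a long-context generate() placeholder and invented file names:

```python
# Sketch of learning a language entirely in context, per the Kalamang example.
# ASSUMPTIONS: `generate` is a long-context LLM placeholder and the two
# file names are invented; any grammar notes and sentence pairs would do.

def generate(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a long-context LLM call

def translate_to_english(sentence: str) -> str:
    grammar = open("grammar_notes.txt", encoding="utf-8").read()
    pairs = open("parallel_sentences.txt", encoding="utf-8").read()
    prompt = (
        "Below is every available resource for a very low-resource language.\n"
        f"--- GRAMMAR NOTES ---\n{grammar}\n"
        f"--- PARALLEL SENTENCES ---\n{pairs}\n"
        f"Using only the material above, translate into English: {sentence}"
    )
    return generate(prompt)  # the model picks up the language from context alone
```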

Bussin' With The Boys
Valentine's Day Special + Do the boys REALLY know their Wifeys?? | For The Dads

Bussin' With The Boys

Play Episode Listen Later Feb 11, 2026 156:47 Transcription Available


In this episode of For The Dads with Former NFL Linebacker Will Compton, hosts Will and Sherm discuss their plans for Valentine's Day, Will talks about finally being back with the fam after the Super Bowl, and the boys play a little newlywed game, all while keeping the episode fun, fresh, and, of course, under an hour. The episode kicks off with Will finally getting his power back (!) before they dive into some hilarious conversations, including:
- How much do they know their wifeys?
- A TON of PT6 stories to tell
- Demon joins the show!
Other highlights include:
- Sherm shares a combined Dad Hack / Dad Loss
- Will shares a sweet story about Rue

The Final Furlong Podcast
Cheltenham Novice Chasers Special: Paul Ferguson's Arkle & Brown Advisory Betting Guide with 10/1 Bet

The Final Furlong Podcast

Play Episode Listen Later Feb 11, 2026 90:35


Emmet Kennedy is joined by Paul Ferguson, author of the Weatherbys Cheltenham Festival Betting Guide, for a full assessment of the Grade 1 novice chase division. We start with the Arkle Novices' Chase (2m) – a race built on speed, precision and temperament.

The Crazy Ex-Wives Divorce Club
Why You Keep Repeating the Same Relationship (And How to Stop)

The Crazy Ex-Wives Divorce Club

Play Episode Listen Later Feb 11, 2026 34:15


If you feel like you keep ending up in the same relationship with a different person, this episode explains why. In the Season 12 premiere of The Crazy Ex-Wives Club, Erica breaks down the real reason relationship patterns repeat after divorce: not because you're broken, unlucky, or choosing the "wrong" people, but because unhealed wounds, nervous system responses, and unconscious expectations are still running the show. This episode explores the space between rushing back into dating and avoiding it altogether. Erica walks through the three core lessons that determine whether you're actually ready for a new relationship. She explains how partners become emotional stand-ins, why asking someone else to regulate your happiness creates resentment, and how to tell the difference between a "me problem" and a "we problem." You'll also hear why even the right person can trigger you, how old wounds from betrayal and infidelity resurface in new relationships, and why triggers are information, not proof that you're failing at healing.
You'll learn:
- Why repeating relationship patterns after divorce is common and preventable
- How to tell the difference between a personal trigger and a real relationship issue
- What "jumping through hoops" looks like and why it destroys connection
- How divorce rewires your nervous system and impacts dating readiness
- Why asking a partner to make you happy creates resentment
- How unhealed wounds from betrayal show up in new relationships
- Why triggers are data, not red flags
- How to stop outsourcing emotional regulation to a partner
- What it means to enter a relationship whole instead of looking to be completed
- How divorce can become a blueprint for healthier relationships moving forward
We talk about:
00:00 Wondering if you're ready to date again
02:00 Why people rush back into dating or avoid it completely
04:00 Divorce as a nervous system reset
06:00 "Me problem vs we problem" in relationships
08:00 How relationships mirror unhealed wounds
10:00 Why expecting a partner to complete you creates pressure
12:00 Jumping through emotional hoops and resentment
14:00 Self-imposed expectations and burnout
16:00 Cleaning up your side of the street
18:00 The stories your mind creates when triggered
20:00 Infidelity wounds and anxiety in new relationships
22:00 Communicating triggers instead of assuming meaning
24:00 Why even good partners will trigger you
26:00 Using triggers as information, not danger
28:00 Recognizing repeating conflict patterns
30:00 Choosing new responses instead of old reactions
32:00 Why divorce gives you tools to never let it get that bad again
Links Mentioned in the Show:
- Looking for support on your journey? Join THE CLUB
- Ready to Define the New You? Create your BLUEPRINT
- Contact Erica & The Crazy Ex-Wives Club: www.thecrazyexwivesclub.com
- Tag us @ Instagram | Facebook | TikTok
Did you love this episode? Make sure to follow for more.

The Corelink Solution with James Rosseau, Sr.
207. Wingy Danejah: When Recognition Tests Faith

The Corelink Solution with James Rosseau, Sr.

Play Episode Listen Later Feb 10, 2026 49:57


Jumping over many hurdles on his path to finding Christ, Wingy Danejah shares how he embraced faith through his challenges. He reflects on his early life in Jamaica, the influence of his strict upbringing, and how he discovered his passion for music through poetry and performance. Wingy discusses his experiences touring with renowned artists like Beenie Man and Sean Paul, and the pivotal moment when he decided to dedicate his life to God, leading to a profound shift in his music and purpose. Wingy emphasizes the importance of unity, love, and transparency in both his personal life and music. He candidly addresses the struggles of transitioning from secular to gospel music, the challenges of judgment within the church, and the need for artists to support one another. Wingy also shares insights on managing public perception, the impact of his music on listeners, and his commitment to using his platform for good. Looking ahead, Wingy expresses his desire to create music that reflects God's love and to give back to the community, emphasizing that his journey is about more than personal success: it's about uplifting others and spreading hope.

McElroy and Cubelic in the Morning
2-10-26 McElroy & Cubelic in the Morning Hour 3: What FCS teams jumping to FBS means moving forward; is Greg a biased analyst; Sam Herder talks CFB

McElroy and Cubelic in the Morning

Play Episode Listen Later Feb 10, 2026 47:51 Transcription Available


Tuesday's 9am hour of Mac & Cube kept on with Sam Herder from HERO Sports, who tells McElroy & Cubelic why North Dakota State made the jump to FBS, how the move invigorated fans and donors, and what the team's ceiling is now that it has moved to a bigger level of sports; then, the guys find out that Greg has apparently been biased toward the SEC while broadcasting games; and finally, a few Bad Box Scores of the Day close out the Tuesday show. "McElroy & Cubelic In The Morning" airs 7am-10am weekdays on WJOX-94.5! See omnystudio.com/listener for privacy information.

OCD RECOVERY

This podcast shows you how to fully recover from OCD. Each episode breaks down the exact techniques and nuances that stop rumination, reduce compulsions, and help you retrain your brain out of the OCD cycle. We cover every major OCD theme, including:
- Pure-O OCD
- Relationship OCD
- Harm OCD
- Real Event OCD
- SO-OCD / Sexuality OCD
- Religious / Scrupulosity OCD
- Cleaning & Contamination OCD
- Physical Compulsions
- All other OCD subtypes
My goal is simple: clear guidance that actually works, explained in a way that is calm, direct, and easy to apply immediately. You can fully recover from OCD. Don't give up: you're not stuck, and your brain can change.

Linchpin Conversations
Major Announcement!

Linchpin Conversations

Play Episode Listen Later Feb 9, 2026 57:48


VNR cycles for free! How to scale dumbbell workouts. Advice I would change. Working out when sick. Modifying box jumps. Don't quit. How to test your limits. Fitness & pregnancy. Jumping rope for conditioning. Linchpin complexes. Working around an injury. Doing lunges & carries in a small space. Weight belts & wrist wraps. Prioritizing lifts & skills.

The Mockingpulpit
"Jumping Backward into Grace" - John Newton

The Mockingpulpit

Play Episode Listen Later Feb 9, 2026 18:33


Check out St. Michael's Episcopal Church, Austin, TX, where John serves as Rector.

Straight Up Chicago Investor
Episode 430: From Flipping Neighborhood Warehouses to Multifamily Rentals: Rapid REI Growth with Reed Meyer

Straight Up Chicago Investor

Play Episode Listen Later Feb 5, 2026 50:31


Reed Meyer, founder and managing broker at Lease312, shares top-tier leasing advice and how connecting with mentors has propelled his real estate journey's growth at such a young age! Reed starts off by detailing the process of acquiring his first property, a 3-unit house hack in Logan Square. He talks about purchasing an industrial warehouse and flipping it for double the price in just 45 days! Reed shares golden insights on apartment listing marketing, leasing, and tenant expectations in prime North Side neighborhoods. To conclude, Reed gives a 5-year outlook on his business goals and Chicago's residential rental market! If you enjoy today's episode, please leave us a review and share with someone who may also find value in this content!
=============
Connect with Mark and Tom: StraightUpChicagoInvestor.com
Email the Show: StraightUpChicagoInvestor@gmail.com
Properties for Sale on the North Side? We want to buy them. Email: StraightUpChicagoInvestor@gmail.com
Have a vacancy? We can place your next tenant and give you back 30-40 hours of your time. Learn more: GCRealtyInc.com/tenant-placement
Has Property Mgmt become an opportunity cost for you? Let us lower your risk and give you your time back to grow. Learn more: GCRealtyinc.com
=============
Guest: Reed Meyer, Lease312
Link: Reed's LinkedIn
Link: SUCI Ep 421 - Alec Greenberg
Link: SUCI Ep 259 - Sal Becovic
Link: SUCI Ep 22 - Matt Fritzshall
Link: SUCI Ep 51 - John Westbrook
Link: Gabe Horstick (Network Referral)
Link: Zeckendorf Autobiography
Guest Questions:
01:56 Housing Provider Tip - Get great pictures, get floor plans, and be responsive to optimize leasing of apartments!
03:15 Intro to our guest, Reed Meyer!
06:45 Reed's first house hack.
11:37 The nuances of industrial property investing.
16:40 Reed's transition from corporate to real estate full time!
22:12 Top leasing mistakes by landlords.
29:15 Jumping into a 5-unit house hack!
34:00 Tenant expectations in Lakeview.
40:50 Outlook on the Chicago leasing market and rent growth!
42:52 Reed's 5-year business outlook.
45:43 What is your competitive advantage?
45:53 One piece of advice for new investors.
46:10 What do you do for fun?
46:54 Good book, podcast, or self development activity that you would recommend?
47:26 Local Network Recommendation?
48:08 How can the listeners learn more about you and provide value to you?
-----------------
Production House: Flint Stone Media
Copyright of Straight Up Chicago Investor 2026.

Morning Joe
Mika: Have Noem, Miller, Bessent been held accountable for jumping to ICE's defense for no reason?

Morning Joe

Play Episode Listen Later Jan 28, 2026 46:36


Mika: Have Noem, Miller, Bessent been held accountable for jumping to ICE's defense for no reason? To listen to this show and other MS podcasts without ads, sign up for MS NOW Premium on Apple Podcasts. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.