Podcasts about Attached


  • 2,602 podcasts
  • 4,092 episodes
  • 39m average duration
  • 5 weekly new episodes
  • Latest episode: Feb 27, 2026

POPULARITY

[Popularity trend chart, 2019–2026]



Latest podcast episodes about Attached

Real Ghost Stories Online
Was Charlie Attached to the House or Attached to Her? | Real Ghost Stories

Real Ghost Stories Online

Feb 27, 2026 · 35:13


For years, Cosette had heard the stories about “Charlie” — the entity her friend's mom claimed haunted the house she grew up in. Chains in the basement. Footsteps on the stairs. A presence that never quite left.

Cosette always assumed those stories were tied to that old, abandoned house.

Then one afternoon, while hanging out in her friend's bedroom, she glanced toward the open closet and saw a tall, slouched shadow standing there. It didn't move. It didn't approach. It simply watched.

The air felt thick. Aware. Trying to stay calm, she texted her friend's mom and described exactly what she was seeing. The response she got wasn't disbelief — it was recognition.

And one small detail in that reply made her realize something unsettling: maybe Charlie had never been tied to the house at all.

Meditation x Attachment with George Haas
The Attached Society: How Collective Insecurity Fuels Polarization

Meditation x Attachment with George Haas

Feb 27, 2026 · 54:59


Exploring how anxious, avoidant, and disorganized attachment dynamics play out on a cultural scale, from tribalism and misinformation to the craving for certainty.

Join us in Meditation x Attachment Level One to get to the root issues and craft a life that feels resourced, fulfilling, and balanced. Know yourself more fully. Develop skills to stay emotionally balanced and regulated through the highs and lows of life. Have a partnership, friendships, and relationships that feel nourishing (not draining).

This is your time to live a meaningful life. We'll give you the blueprint, which lives at the intersection of meditation and attachment theory. Our next cohort of Level One runs March 7, 14, and 21. Secure your spot today: https://www.mettagroup.org/meditation-x-attachment-level-one

Be The Husband She Brags About
7: How an Anxious Attached Partner Can Turn Their Hypervigilance Into a Relationship Gift

Be The Husband She Brags About

Feb 27, 2026 · 53:16


How is hypervigilance impacting your marriage? Is your anxiety creating constant tension between you and your partner? And if you are the partner of someone experiencing hypervigilance, are you at the end of your tether trying to support them and coming up short?

The truth is, most relationships treat hypervigilance as a problem to fix, which in turn creates more disconnection and conflict. The good news is... there's a different way to view this pattern, a way that will support both partners in feeling safe and connected within the relationship. In this episode we share 3 key steps to turn the pattern of hypervigilance from a problem to fix into a gift for the relationship.

This episode is for every anxiously attached person who has felt broken and not seen, not only within the marriage, but also within support containers such as therapy. You are not alone in this and you are not broken. And this episode is also for every partner of someone experiencing hypervigilance who feels at a loss as to how to support their spouse. These 3 steps will make all the difference. Share this episode with your partner and unlock even better results together!

If you enjoyed this episode, subscribe and leave a review. It truly helps us reach more listeners who, just like you, want to unlock the full potential of long-term relationships.

Chapters:
00:00 — Understanding Hypervigilance in Relationships
02:59 — The Problem vs. Puzzle Perspective
05:58 — The Impact of External Validation
08:56 — The Journey from Conflict to Connection
11:54 — Reframing the Relationship Dynamic
14:46 — The Hero's Journey in Marriage
17:54 — Embracing Humility and Perspective
21:08 — Finding Common Ground in Truths
26:02 — Navigating Emotional Landscapes: Masculine and Feminine Dynamics
32:15 — The Power of External and Internal Curiosity
36:51 — Understanding Hypervigilance: A Gift of Observation
47:08 — Transforming Trauma into Trust and Connection

Related Episode: Ep3: The #1 Thing That Will Have Your Partner Love Your Feedback

YouTube Track 1253823 – Monetization ID: 9HWIVQATIQUJECP3.

RELIGIOUS LIBERTY REPORT
202 - AMERICAN REVIVAL - IRAN STRIKE LOOMS - SPECIAL PLANS FOR SALVATION - THE VIRTUOUS PAGAN - PREADAMITE PEOPLE

RELIGIOUS LIBERTY REPORT

Feb 27, 2026 · 29:02


Dear RLR Listeners,

Attached is RLR 202, where I discuss:

AN AMERICAN REVIVAL
IRAN STRIKE LOOMS
SPECIAL PLANS FOR SALVATION
THE VIRTUOUS PAGAN
PREADAMITE PEOPLE

Thank you for your valuable support. God Bless You.

Alexander Alfano
+(1) 305 450 8550
aalfano@lawalfano.com

Date with Cents
How To Savagely Vet A Man In 30 Days (Before You Get Attached)

Date with Cents

Feb 26, 2026 · 48:04 · Transcription available


Send episode requests here.

You've convinced yourself it's bad luck, that there are no good men in your city, that maybe you're just not the kind of woman men commit to. But you don't have bad luck—you have untrained discernment.

In this episode, I'm breaking down the exact framework to savagely vet a man in 30 days before you get attached. You'll discover the four things to evaluate instead of your feelings, why smart women still end up with men who fail all four, and the next layer most people miss—the 3 I's that separate a man who behaves well from a man who's actually right for you.

Ready to stop wasting six months on men who showed you everything in week two? The doors are open to my signature program, Curved 2 Cuffed. ENROLL HERE BEFORE MARCH 1ST.

Inside C2C, I help you build a rotation of 2-3 commitment-ready men in 90 days... who will pursue you for marriage. The program includes 12 months of access to curriculum, weekly coaching calls, weekly workshops, daily dating support, on-demand conversation and profile reviews, and more. The investment is $3,000 one time or 6 payments of $550. Get your coins ready because we've got work to do. ENROLL HERE BEFORE MARCH 1ST.

Follow me on Instagram for more dating gems at: @torahcents @curved2cuffed

The MFCEO Project
1003. Q&AF: Attached To Old Identities, Potential Vs. Success & Struggles Of A Growing Business

The MFCEO Project

Feb 23, 2026 · 47:20


On today's episode, Andy answers your questions on how to become a better version of yourself by dropping old identities, how to unlock your potential without fear holding you back, and the best ways to handle the struggles of a growing business.

Audio Dharma
Happy Hour: Infusing the Body with Kind Awareness with No Strings of Expectation Attached

Audio Dharma

Feb 23, 2026 · 48:04


This talk was given by Nikki Mirghafori on 2026.02.23 at the Insight Meditation Center in Redwood City, CA. ******* For more talks like this, visit AudioDharma.org ******* If you have enjoyed this talk, please consider supporting AudioDharma with a donation at https://www.audiodharma.org/donate/. ******* This talk is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Homilies and more By Fr. Sean Wilson
Less Attached to Earth, More Attached to Him

Homilies and more By Fr. Sean Wilson

Feb 23, 2026 · 7:02


Homily for Ash Wednesday - February 18, 2026

those F%#KING fangirls
151 | WHERE WOULD THE SMUT GO: if these 2010's YA CLASSICS were Adult Romance with CHRISTINA LAUREN

those F%#KING fangirls

Feb 20, 2026 · 54:23


Christine Riccio & Natasha Polis talk all things nerdy in books, TV, movies, pop culture, and fandoms, and how they integrate into their adult lives. Today they're LIVE FROM LOVE LIT CON with special guests, internationally bestselling romcom duo Christina Lauren — and together they're chatting about all our favorite YA classics from the 2010s, if they were adult romance: where the smut would go!

Today in Fangirl Tea Time: Join Christine and Natasha for more stories about their recent life escapades.

Support the pod by joining the Forking Fangirls Patreon community: http://patreon.com/thoseforkingfangirls
TEAM EDWARD: The first four Heated Rivalry episode commentaries are up now!
MAIN DISCUSSION STARTS AT: 17:12
Follow the visual show on our YouTube: http://youtube.com/@thoseforkingfangirls
Get Christine's new book THIRTY, FLIRTY, & FOREVER ALONE: https://www.amazon.com/dp/1662532156
Add Thirty, Flirty & Forever Alone on Goodreads: https://www.goodreads.com/book/show/230393104-thirty-flirty-and-forever-alone
Check out Natasha's sewing classes: https://www.natashapolis.com/ (join our Patreon to get 10 dollars off the classes!)
Website: https://thoseforkingfangirls.com/
Email us feedback: thoseforkingfangirls@gmail.com
Instagram: https://www.instagram.com/thoseforkingfangirls/
Twitter: https://twitter.com/forkfangirlspod
TikTok: https://www.tiktok.com/@thoseforkingfangirls
Get Christine's novel Attached at the Hip: https://a.co/d/grmPeVy
Check out the Selkie Collection and get 10% off your order with code TASHAPOLIS: https://selkiecollection.com/collections/all

This Awkward Life
“What You're Attached To Matters”

This Awkward Life

Feb 19, 2026 · 4:26


“Sometimes spiritual growth isn't about becoming something new—it's about changing what's holding you together.”

In this episode, I talk about how a cheap watch bracelet was ruining a perfectly good watch.

Warehouse Safety Tips
S6 Ep313: Tool and Machine Hazards | Warehouse Safety Tips | Episode 313

Warehouse Safety Tips

Feb 18, 2026 · 5:02


https://jo.my/pde2pq

Tool and Machine Hazards

Hand safety is one of those things people assume they've “got.” Until a quick job turns into a bandage, a pinch, or a scary near-miss with moving parts. Week 3 focuses on tool and machine hazards. Cuts, pinches, and caught-in hazards don't always come from big mistakes. They come from small shortcuts. A dull blade. A missing guard. A jam you “just want to clear real quick.”

Think about how often your hands are at risk. Box cutters. Strapping tools. Conveyor points. Pallet jacks. Dock plates. Even a simple drill can bite when it binds. Hands heal slowly, and grip strength matters at work and at home. So let's keep your fingers where they belong. Attached. Working. Pain-free.

Quick ways to prevent cuts, pinches, and caught-in injuries. Here are a few tips to assist you with hand safety around tools and machines:

Use the tool as intended. No screwdriver as a chisel. No knife as a pry bar. Tools slip when they're doing the wrong job. That's when the blade finds your hand instead of the box.

Keep tools in good shape, or tag them out. Dull blades take more force. Loose handles twist. Worn grips slide. If it's damaged, don't “make it work.” Swap it out. Report it. Simple fix. Big payoff.

Keep hands out of pinch points and moving parts. If it rolls, spins, pulls, or cycles, it can grab you. Use push sticks, clamps, or the right handling points. If you can see a gap closing, don't test it with your fingers.

Lockout/tagout before clearing a jam or servicing equipment. “Off” isn't the same as “safe.” Stored energy, gravity, or an auto-start can bring a machine back to life. Take the extra minute. Control the energy. That's not a suggestion. That's a safety rule.

Use guards and barriers every time. Don't bypass them. Guards are there because someone would have been hurt without them. If a guard doesn't fit right or slows down the job, call it out. Fix the root issue. Don't remove the protection.

As always, these are potential tips. Please follow the rules and regulations of your specific facility.

Make hand safety part of how the job feels. A solid safety culture means we notice the little things before they bite. You can often feel a hazard coming. The tool doesn't sit right. The machine sounds off. The jam keeps happening. Listen to that.

Take a quick pause before you reach in. Ask yourself, “If this moves right now, where does my hand go?” Build that habit, and it becomes automatic. If you see someone about to make a risky reach, speak up. A quick callout can save weeks of recovery.

Thank you for joining another episode of Warehouse Safety Tips. Until we meet next time - have a great week, and STAY SAFE!

#Safety #SafetyCulture #StaySafe #SafetyFirst #StayAlert #HandSafety #CaughtInHazards #PinchPointSafety #CutPrevention #ToolSafety #MachineGuarding #LockoutTagout #MaterialHandlingSafety #NearMissPrevention

What is The Future for Cities?
What keeps us happy in and attached to place? Jeff Siegler (404I trailer 1)

What is The Future for Cities?

Feb 14, 2026 · 2:25


Are you interested in the health of individuals and communities? What do you think about the effect of the environment on us? How can we encourage human behaviour change for better urban futures?

Trailer for episode 404 - interview with Jeff Siegler, founder of Revitalize, or Die and author of the book Your City is Sick. We will talk about his vision for the future of cities, individual and community health, how the environment affects us, feelings and technical details, and many more.

Find out more in the episode.

Episode generated with Descript assistance (affiliate link).

Music by Lesfm from Pixabay.

those F%#KING fangirls
#150 | BRIDGERTON IS FOR THE GIRLS WHO YEARN: season 4 part 1 breakdown

those F%#KING fangirls

Feb 13, 2026 · 105:46


Christine Riccio & Natasha Polis talk all things nerdy in books, TV, movies, pop culture, and fandoms, and how they integrate into their adult lives. Today they're discussing Bridgerton season 4 part 1! Plus they chat about Taylor Swift's Opalite music video, Traitors, the Olympics, Project Hail Mary, and more!

Today in Fangirl Tea Time: Join Christine and Natasha for more stories about their recent life escapades.

Support the pod by joining the Forking Fangirls Patreon community: http://patreon.com/thoseforkingfangirls
TEAM EDWARD: The first four Heated Rivalry episode commentaries are up now!
MAIN DISCUSSION STARTS AT: 46:57
Tea time starts at: 1:45:46
Follow the visual show on our YouTube: http://youtube.com/@thoseforkingfangirls
Get Christine's new book THIRTY, FLIRTY, & FOREVER ALONE: https://www.amazon.com/dp/1662532156
We'll be at LOVE LIT CON in San Diego! https://lovelit.com/
Our TFF Panel with Christina Lauren will be at 2:15 pm on Friday!!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.

Jeff Dean: Thanks for having me.

Shawn Wang: It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially, when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize the frontier versus how you actually deploy it if you build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the sort of slightly less capable last year's version, or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of broader use cases. So I think what we want to do is always have kind of a highly capable, affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.

And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But, like, I'm curious how you think about the cycle of these ideas, even, like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? You worked on so many ideas that end up being influential, but, like, in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish them into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is: RL basically spikes models in a certain part of the distribution. And then, well, you can spike models, but usually it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. And I think the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as: you can have a much smaller model, and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed: you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it enables us, for multiple Gemini generations now, to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.

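To make the "logits as soft supervision" point concrete, here is a minimal sketch of the classic distillation loss from the Hinton, Vinyals, and Dean paper, written as PyTorch-style Python; the temperature T and mixing weight alpha are illustrative hyperparameters, not values from the episode:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soft targets: the teacher's temperature-softened distribution carries
        # more signal than one-hot labels (it encodes how plausible the wrong
        # classes are relative to each other).
        soft_teacher = F.softmax(teacher_logits / T, dim=-1)
        soft_student = F.log_softmax(student_logits / T, dim=-1)
        # The KL term is scaled by T^2 to keep gradient magnitudes comparable
        # to the hard-label term as T changes.
        kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
        # Hard-label cross-entropy keeps the student anchored to ground truth.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce

The student can make many passes over the same data because the target is a full distribution per example, which is exactly why a Flash-sized model can approach the larger model's quality.
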
Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference-time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, cool. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, just economics-wise: because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products of various kinds: AI Mode, AI Overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI mode. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now, not just “write me a for loop,” but “write me a whole software package to do X or Y or Z.” And so having low latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kinds of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like: in certain tasks, the Pro model today has saturated some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think, for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like, I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true not just of coding, but of, you know, now, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas, in order to sort of make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. Like, how do you keep pushing the team internally, like, this is what we're building towards? Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess, and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data, or very related kinds of data being in your training data. Um, so we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on, you know, assessing how do we make the model better at these kinds of things. Is it that we need a different kind of data to train on that's more specialized for this particular kind of task? Do we need, um, you know, a bunch of architectural improvements or some sort of model capability improvements? You know, what would help make that better?

Shawn Wang [00:12:53]: Is there such an example, where a benchmark inspired an architectural improvement? I'm just kind of jumping on that because you just mentioned it.

Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know…

Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. Everyone had them. I was like, how did everyone crack this at the same time? Right. Yeah.

Jeff Dean [00:13:23]: I mean, I think, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something. Models don't actually have, you know, much larger than 128K these days, or 2M or something; we're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context, and then actually being able to make use of that, is useful. The use cases we're trying to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic take-all-this-content-and-produce-this-kind-of-answer benchmarks for long context, that better assess what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because I think the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is: can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission. So, like, your emails, your photos, your docs, your plane tickets. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens, in a meaningful way? Yeah.

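A quick back-of-the-envelope on why "purely scaling the existing solutions, which are quadratic" breaks down; a sketch in Python, counting only pairwise attention scores to show the growth rate (constants and per-score costs deliberately omitted):

    # Self-attention compares every token with every other token, so the
    # number of attention scores grows quadratically with context length.
    for n in (1_000_000, 1_000_000_000, 1_000_000_000_000):
        scores = n * n  # pairwise comparisons for one attention pass
        print(f"{n:>16,d} tokens -> {scores:.1e} attention scores")

    # 1,000,000 tokens         -> 1.0e+12 scores (painful but feasible today)
    # 1,000,000,000 tokens     -> 1.0e+18 scores (a million times more work)
    # 1,000,000,000,000 tokens -> 1.0e+24 scores (hence the "illusion" must
    #                             come from retrieval, not brute-force attention)
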
Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens a day, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos…

Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else is extremely information dense. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, to people that sometimes means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or, you know, various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of the trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of acquaints the model with the fact that this is a thing.

It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? 
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to this reminds me of one of your classics, which I have to bring up, which is latency numbers. Every programmer should know, uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache mistake? How long does branch mispredict take? How long does a reference domain memory take? How long does it take to send, you know, a packet from the U S to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumb nailing or something of the result page, you know, how, what I do that I could pre-compute the image thumbnails. I could like. Try to thumbnail them on the fly from the larger images. What would that do? How much dis bandwidth than I need? How many des seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM from the accelerator. Attached memory or DRAM or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picodule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the off chip, but on the other side of the same chip can be, you know, a thousand picodules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picodules. So you better make use of that, that thing that you moved many, many times with. 
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picodules in order to do your one picodule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously NVIDIA has caused a lot of waves with betting very hard on SRAM with Grok. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost. Uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but as if you do that and it all fits in. In SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it? Is it worth doing an hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML and in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out where, what ML computations will people want to run two to six years out in a very fast changing field. And so having people with interest. 
And so having people with interesting ML research ideas, of things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly.
Shawn Wang: Wow.
Jeff Dean: Because, I mean, sometimes you can squeeze some changes into N plus one, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if they work out, they make something ten times as fast; and if they don't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go.
Alessio Fanelli [00:37:58]: Is there a reverse of that, where you've already committed to a chip design, so you cannot take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So I think it goes both ways. Sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary...
Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very low-bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.
Shawn Wang [00:39:15]: Interesting. So, low precision, but with scaling factors. Huh. Never considered that. While we're on this topic: the concept of precision at all is weird when we're sampling. At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So there's a movement towards energy-based models and processors. I'm curious, obviously you've thought about it, but what's your commentary?
Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends, though.
Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, is another. And speculative decoding is a way you can get an equivalent, very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: ...batch factor. Like, you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a five-X improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy, real energy, not energy-based models, and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models, more cheaply and with lower latency.
Shawn Wang [00:41:03]: Yeah. I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.
Jeff Dean [00:41:23]: Yeah. I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Any other interesting research ideas, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.
Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, say, one model that's using other models as tools in order to accomplish much more significant pieces of work collectively than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? That's a pretty interesting open problem, because it would broaden out the capabilities of the models. The improvements you're seeing in math and coding, if we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that,
effectively, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with deep research. And you kind of have it with AI Mode; in a way, it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are like information retrieval, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system.
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly with this RLVR thing: everyone's talking about, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judges.
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and I think that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems, right? Like: Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now; you're doing IMO and Erdős problems in pure language. That is a really, really amazing jump in capabilities in a year and a half or so. For other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better.
Shawn Wang [00:46:13]: Yeah.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.
Shawn Wang [00:46:20]: As far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.
Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
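As a concrete illustration of the critic pattern Dean describes, rating a pile of retrieved documents with a second model, or the same model prompted differently, here is a minimal sketch. The generate() function is a hypothetical stand-in for whatever LLM call is available, not a real API, and the prompt format is likewise invented:

# Minimal sketch of model-as-critic reranking. generate() is a hypothetical
# placeholder for an LLM call; nothing here is a real library API.

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM call")

CRITIC_PROMPT = (
    "You are a relevance judge.\n"
    "Query: {query}\n"
    "Document: {doc}\n"
    "Reply with one integer from 0 (irrelevant) to 10 (highly relevant)."
)

def rerank(query: str, docs: list[str], keep: int = 50) -> list[str]:
    """Score each retrieved document with the critic and keep the top few."""
    scored = []
    for doc in docs:
        reply = generate(CRITIC_PROMPT.format(query=query, doc=doc))
        try:
            score = int(reply.strip().split()[0])
        except (ValueError, IndexError):
            score = 0  # treat an unparseable judgment as irrelevant
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:keep]]

The shape matches the conversation: a critic turns 2,000 raw retrievals into the 50 a reasoning step actually reads, and the judge can be the very model being judged, just prompted differently.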
Shawn Wang: Um, just to draw a bit on the IMO gold. I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads, right? We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan, do chains of thought, and roll them back: now that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one. In a lot of ways, we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about them.
Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago.
Jeff Dean [00:48:06]: I mean, I do think that IMO progression, translating to Lean and using Lean, plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model; I want to do speech recognition, so I have a speech model. I think the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. And they're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know how they work, I don't know where the IMO competition was held, I don't know the rules of it, I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute and they can tackle any task. Which is the bitter lesson, I guess. I don't know.
Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: So I want to push there a bit.
I think there's one hole here, which is this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And God knows, Gemini Pro is like one to ten trillion parameters; we don't know. But take the Gemma models: a lot of people want open-source local models, and those have some knowledge that is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling down to the small models, you're memorizing things that are not useful. So do we want to extract that? Can we divorce knowledge from reasoning, you know?
Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...
Shawn Wang [00:51:49]: Yeah.
Jeff Dean [00:52:01]: ...and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini: we're probably not going to train Gemini on my email. We'd rather have a single model that can use retrieving from my email as a tool, have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction.
Alessio Fanelli [00:52:24]: That makes sense. Do you think vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or?
Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for robotics. We're probably not going to train Gemini on all possible robotics data you could train it on, because we want it to have a balanced set of capabilities.
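The multi-stage retrieval-plus-reasoning loop Dean sketches for a personal assistant can be written down in a few lines. This is a hedged sketch only: generate(), search_email(), and search_photos() are hypothetical stand-ins, not real Gemini or Workspace APIs:

# Sketch of multi-stage retrieval with retrieval exposed as tools. All three
# callables below are hypothetical placeholders, not real APIs.

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM call")

def search_email(query: str) -> list[str]:
    raise NotImplementedError("stand-in for an email retrieval tool")

def search_photos(query: str) -> list[str]:
    raise NotImplementedError("stand-in for a photo retrieval tool")

TOOLS = {"email": search_email, "photos": search_photos}

def answer(question: str, max_rounds: int = 4) -> str:
    """Alternate between asking the model what to retrieve and reasoning
    over the results, rather than baking personal data into the weights."""
    evidence: list[str] = []
    for _ in range(max_rounds):
        prompt = (
            f"Question: {question}\n"
            f"Evidence so far: {evidence}\n"
            "Reply 'SEARCH <tool> <query>' (tools: email, photos) "
            "or 'ANSWER <final answer>'."
        )
        reply = generate(prompt).strip()
        parts = reply.split(" ", 2)
        if reply.startswith("SEARCH ") and len(parts) == 3:
            _, tool, query = parts
            evidence.extend(TOOLS.get(tool, search_email)(query))
        else:
            return reply.removeprefix("ANSWER").strip()
    return "no answer within the round budget"

The design point is the one being made in the conversation: parameters hold general world knowledge, while personal or fast-changing facts live behind tools the model can call over several stages.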
Jeff Dean: So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, while it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all able to be knitted together to work in concert and called upon in different circumstances. If I have a health-related thing, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.
Shawn Wang [00:54:36]: Installable knowledge.
Jeff Dean [00:54:37]: Right.
Shawn Wang [00:54:38]: Just download it as a package.
Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data.
Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.
Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really the question.
Jeff Dean [00:55:21]: Well, I mean, healthcare is a particularly challenging domain. There's a lot of healthcare data that we don't have access to, appropriately, but there are a lot of healthcare organizations that want to train models on their own data, data that is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but might be better than a general model trained on, say, public data.
Shawn Wang [00:55:58]: Yeah. And by the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource, because it's only spoken by, I think, 120 people in the world, and there's no written text.
Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. But you'd put the whole data set in the context, right?
Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in those languages in the world, and we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models.
Shawn Wang [00:56:49]: Yeah.
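The "put the language in the context" trick is mechanically simple, given a long enough context window. In this hedged sketch, generate() is again a hypothetical long-context LLM call, and the two file names are placeholders for whatever reference material exists for the language:

# In-context learning for a low-resource language: no fine-tuning, just
# prepend the available reference material to every request. generate()
# and both file names are placeholders, not real APIs or data sets.

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for a long-context LLM call")

def translate_to_english(sentence: str) -> str:
    # Assumed local files: a grammar description and a bilingual word list.
    with open("grammar_notes.txt", encoding="utf-8") as f:
        grammar = f.read()
    with open("bilingual_wordlist.txt", encoding="utf-8") as f:
        wordlist = f.read()
    prompt = (
        "Below are grammar notes and a bilingual word list for a "
        "low-resource language. Using only this material, translate.\n\n"
        f"GRAMMAR:\n{grammar}\n\nWORD LIST:\n{wordlist}\n\n"
        f"Sentence to translate into English: {sentence}"
    )
    return generate(prompt)

Whether this works hinges on the context window holding essentially all written material for the language at once, which is the regime being described here.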

Therapy Gecko
GECKMAIL: “MY BOYFRIEND IS ATTACHED TO HIS MOTHER”

Therapy Gecko

Play Episode Listen Later Feb 11, 2026 74:25 Transcription Available


On this geckmail we read emails from a woman whose boyfriend is too attached to his mother, a guy whose speech impediment is affecting his social life, a judgmental girlfriend, a timid man with 3 girlfriends, and other stuff written by real human beings who are alive.
It is time to eat a log. I am a gecko. Send an email to therapygeckomail@gmail.com to maybe have it possibly read on the show potentially. Get notified for when I come to your city to do a live gecko show: therapygeckotour.com GET BONUS EPISODES: therapygecko.supercast.com FOLLOW ME ON GECKOGRAM: instagram.com/lyle4ever GET WEIRD EMAILS FROM ME SOMETIMES BY CLICKING HERE.Follow me on Twitch to get a notification for when I’m live taking calls. Usually Mondays and Wednesdays but a lot of other times too. twitch.tv/lyleforeverSee omnystudio.com/listener for privacy information.

Levelheaded Talk
02-11-2026 The Emotion Attached to Our Actions

Levelheaded Talk

Play Episode Listen Later Feb 11, 2026 10:21


Dr. Vitz talks about the difference in our behaviors being the emotion and motive attached. (Originally aired 07-17-2024)

Spiritual Changemakers
EP 104: I Stopped Running - How I Learned to Stay in Love, Conflict, and Truth

Spiritual Changemakers

Play Episode Listen Later Feb 11, 2026 33:41


I Stopped Running: How I Learned to Stay in Love, Conflict, and Truth

For most of my life, I ran from hard conversations, emotional discomfort, and relationship conflict. In this episode, I share how I learned to stay. I talk about anxious attachment, fear of conflict, people-pleasing, boundaries, and how integrating my feminine and masculine energy transformed the way I love, communicate, and lead.

This episode is for you if you've ever:
– Wanted to escape when things got hard
– Felt unsafe expressing your truth
– Struggled with guilt around boundaries
– Attached too quickly in relationships
– Betrayed yourself to keep peace

You'll learn how healing your nervous system and inner world allows you to show up grounded, secure, and authentic, in love and in life.

00:00 – Welcome & Intent
01:15 – Running from Relationships
04:10 – Childhood Roots of Conflict Avoidance
07:20 – Awareness & Pattern Breaking
09:45 – The Power of Pausing
12:30 – Learning to Set Boundaries
15:40 – Dating from Wholeness
18:50 – Conscious Uncoupling
22:30 – Attraction Without Attachment
24:00 – Real-Life Boundary Story
27:30 – Communication & Safety
30:00 – Living from Love, Not Fear
32:20 – Closing Message

Connect with me:

Back to The Music
【Hold on】Version 2 | ZNSE 1858| Music | Praise the Lord 2026 | Zion New Song English

Back to The Music

Play Episode Listen Later Feb 10, 2026 5:11


Mark Reardon Show
Brianna Lyman Reacts to the Super Bowl Halftime Show & the Democrats Woke Messaging Attached to it

Mark Reardon Show

Play Episode Listen Later Feb 9, 2026 10:58


In this segment, Mark is joined by Brianna Lyman, a Columnist with The Federalist. She discusses the Super Bowl Halftime Show, the criticism and woke messaging that was attached.

those F%#KING fangirls
#149 | Heated Rivalry is All Consuming

those F%#KING fangirls

Play Episode Listen Later Feb 6, 2026 110:28


Christine Riccio & Natasha Polis talk all things nerdy in the book, tv, movie, pop culture, fandoms, and how they integrate into their adult lives. Today they're discussing all things Heated Rivalry with special guest Francis Dominic!!! Plus they chat Ted Lasso, Jury Duty season 2, and more!
Today in Fangirl Tea Time: Join Christine and Natasha for more stories about their recent life escapades.
Support the pod by joining the Forking Fangirls Patreon community: http://patreon.com/thoseforkingfangirls
TEAM EDWARD: The first two Heated Rivalry episode commentaries are up now and three is going live very soon!
MAIN DISCUSSION STARTS AT the top of the episode! Snap crackle pop culture news starts at: 1:27:29
FIND FRANCIS on the interwebs: https://www.instagram.com/francisdominiic/
Follow the visual show on our Youtube: http://youtube.com/@thoseforkingfangirls
Get Christine's new book THIRTY, FLIRTY, & FOREVER ALONE: https://www.amazon.com/dp/1662532156
We'll be at LOVE LIT CON in San Diego! https://lovelit.com/
Our TFF Panel with Christina Lauren will be at 2:15 pm on Friday!!

Daily Rosary
February 2, 2026, Feast of the Presentation of the Lord, Holy Rosary (Joyful Mysteries)

Daily Rosary

Play Episode Listen Later Feb 2, 2026 27:55


Friends of the Rosary,

Today, February 2, the Catholic Church celebrates the Feast of the Presentation of the Lord.

This celebration, which takes place forty days after the birth of Jesus, is also known as Candlemas Day, since the blessing and procession of candles are included in the Mass. Christ is the light of the nations, hence the blessing and procession of candles on this day.

Jesus' presentation signifies God's entrance to His temple. Soon after the Baptist's birth, God-made-man entered His temple, presenting Himself to those who were truly seeking Him.

Attached to “Candlemas Day”, we also celebrate the World Day of Prayer for Consecrated Life, founded by Pope St. John Paul II in 1997. That's because consecrated men and women are to be the light in the world, imitating Jesus, the Light of the World.

On this day, the Church expresses its gratitude to all in the community who dedicate themselves in a special way to prayer, and to those with a particular religious vocation to the contemplative life. In the figures of Simeon and Anna, Jesus' presentation in the temple reminds us that prayer and contemplation are well-spent time. Only those who pray and offer penance, like Simeon and Anna, are open to the breath of the Spirit.

This feast of the Presentation has a strong Marian dimension. On one hand, Simeon's prophecy emphasizes Mary's sufferings. Pope John Paul II taught that “Simeon's words seem like a second Annunciation to Mary.” In the previous Liturgical Calendar, it was called the Purification of the Blessed Virgin Mary, indicating the renewal of her total offering to God for the accomplishment of His Divine Plan.

On February 2nd, a secular tradition unfolds: Groundhog Day, well known to schoolchildren and adults alike. The fate of Spring hangs in the balance as a burrowing animal looks for its shadow.

Ave Maria! Come, Holy Spirit, come! To Jesus through Mary!

Here I am, Lord; I come to do your will. Please give us the grace to respond with joy!

+ Mikel Amigot w/ María Blanca | RosaryNetwork.com, New York

Enhance your faith with the new Holy Rosary University app: Apple iOS | New! Android Google Play

those F%#KING fangirls
#148 | OUR TOP 10 TV SHOWS OF 2025

those F%#KING fangirls

Play Episode Listen Later Jan 30, 2026 105:35


Christine Riccio & Natasha Polis talk all things nerdy in the book, tv, movie, pop culture, fandoms, and how they integrate into their adult lives. Today they're going through their top 10 tv shows of 2025! Plus they're chatting Oscar nominations, Taylor Swift, Harry Styles, Ponies, Twinless, and the upcoming America's Next Top Model documentary!
Today in Fangirl Tea Time: Join Christine and Natasha for more stories about their recent life escapades.
Support the pod by joining the Forking Fangirls Patreon community: http://patreon.com/thoseforkingfangirls
TEAM EDWARD: a new bonus AMA is up now, and the first two Heated Rivalry episode commentaries are up as well!
MAIN DISCUSSION STARTS AT: 38:00
Follow the visual show on our Youtube: http://youtube.com/@thoseforkingfangirls
Get Christine's new book THIRTY, FLIRTY, & FOREVER ALONE: https://www.amazon.com/dp/1662532156
We'll be at LOVE LIT CON in San Diego! https://lovelit.com/
Our TFF Panel with Christina Lauren will be at 2:15 pm on Friday!!

The Last American Vagabond
Ghislaine Claims “Co-conspirators” Are Being “Protected” By DOJ & Using ICE To Divide And Conquer

The Last American Vagabond

Play Episode Listen Later Jan 30, 2026 209:35 Transcription Available


Welcome to The Daily Wrap Up, an in-depth investigatory show dedicated to bringing you the most relevant independent news, as we see it, from the last 24 hours (1/30/26). As always, take the information discussed in the video below and research it for yourself, and come to your own conclusions. Anyone telling you what the truth is, or claiming they have the answer, is likely leading you astray, for one reason or another. Stay Vigilant.

Video Source Links (In Chronological Order):
Netanyahu: Israel Will Have Control from ‘River to the Sea' Including Gaza - News From Antiwar.com
(16) Justin Amash on X: "This exceeds 3% of Gaza's population. People often scale these figures to the U.S. equivalent (which I find misleading)—but to use that approach, it would equate to more than 11 million Americans." / X
(16) Adil Haque on X: ""USAID staffers in early 2024 drafted a warning to senior officials in Joe Biden's administration: Northern Gaza had turned into an “Apocalyptic Wasteland” with dire shortages of food and medical aid." "But the U.S. ambassador to Jerusalem, Jack Lew, ... blocked the cable" https://t.co/IHPByyvbI5" / X
(23) Muhammad Shehada on X: "

Soteria Prophetic Ministries
Rejection or Rescue? When God Ends What You Were Too Attached to Release

Soteria Prophetic Ministries

Play Episode Listen Later Jan 30, 2026 2:51


Sometimes what feels like rejection is actually divine intervention. In this episode, Dr. Delisa unpacks the uncomfortable truth that God often removes what we are too emotionally attached to release. Doors closing, relationships ending, platforms shifting, and seasons expiring are not always signs of failure. Many times, they are evidence of God's protection and precision. Drawing from 1 Samuel 15:35 and Romans 8:28, Dr. Delisa explores how rejection exposes misalignment while divine intervention restores direction. We talk candidly about pruning, separation, and why God closes doors we keep trying to reopen. This message will help you reframe loss, trust God's timing, and recognize when an ending is actually mercy in disguise. If you have been struggling to understand why something ended that you prayed would last, this episode will bring clarity, healing, and alignment. Trust the separation. Honor the pruning. God is making room for what must come next. Scriptures referenced include 1 Samuel 15:35 KJV, Romans 8:28 KJV, and John 15:2 KJV.

The Heart of the Matter
Long Distance Romance With Avoidant Attached

The Heart of the Matter

Play Episode Listen Later Jan 29, 2026 57:41


What happens when you've been in a long distance romance with an avoidant attached person and it feels intense even if you've just met a handful of times? You want answers... What is happening here? Can this be anything more? Why do they ghost me and return? As Sarah shares her stories, we try to find her answers that would soothe her troubled heart. I would love to hear your thoughts on this episode. Support the show

The Secret Teachings
Pogroms Progress PT2 (1/29/26) [PT1 attached]

The Secret Teachings

Play Episode Listen Later Jan 29, 2026 180:01 Transcription Available


Earlier last year, we covered Pogroms Progress PT1, focusing on data from Pew indicating the entire world is turning on Israel over Gaza. A year later, countries and citizens from all over the world are turning on Israeli tourists for their arrogant, smug, demanding, noisy, violent, and sexually perverse behavior. In other cases, locals are turning on Israelis who have been documented starting fires. A global pogrom is coming, if not currently unfolding. This episode features PT2 and then a BEST of PT1 attached to the end. *This is the FREE archive, which includes advertisements. If you want an ad-free experience, you can subscribe below underneath the show description.

WEBSITE
FREE ARCHIVE (w. ads)
SUBSCRIPTION ARCHIVE
X / TWITTER
FACEBOOK
INSTAGRAM
YOUTUBE
RUMBLE
BUY ME A COFFEE
CashApp: $rdgable
PAYPAL: rdgable1991@gmail.com
Ryan's Books: https://thesecretteachings.info
EMAIL: rdgable@yahoo.com / rdgable1991@gmail.com
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-secret-teachings--5328407/support.

Dr. Fred Clary's Podcast
Trauma Bonding at a Societal Level: Why Chaos Can Make People Emotionally Attached to What's Hurting Them

Dr. Fred Clary's Podcast

Play Episode Listen Later Jan 29, 2026 18:31


Trauma Bonding at a Societal Level

Trauma bonding at a societal level occurs when entire communities become emotionally attached to ongoing stress, chaos, and threat through repeated cycles of fear and temporary relief. Constant exposure to crisis-driven narratives keeps the nervous system in a heightened state of activation, where cortisol remains elevated and the brain's threat centers dominate decision-making. In this state, people often bond not to peace or truth, but to the very sources of stress that intermittently offer reassurance, identity, or meaning. Over time, this creates emotional dependence on narratives, movements, or media ecosystems that feel familiar and validating, even when they are harmful.

Neurologically and physiologically, societal trauma bonding erodes clarity and resilience. The prefrontal cortex becomes less effective, nuance disappears, and group identity replaces independent discernment. Communities begin to mirror trauma responses seen in individuals: rigidity, hypervigilance, emotional reactivity, and fear of separation from the group. Healing begins when individuals restore nervous system regulation, reconnect to local reality, and reclaim rhythm, coherence, and embodied presence. Calm, grounded truth, rather than outrage, becomes the antidote that slowly dissolves trauma bonds and allows cultures to recover stability and compassion.

Dr. Fred Clary, founder of Functional Analysis Chiropractic Technique and lifting/life coach/gym-chalk-covered philosopher, talks about Community Gaslighting!

Just Spitballin Podcast.
Just Spitballin Podcast Season 8 Episode 262: HNIC and OHNIC (with bonus spittin attached)

Just Spitballin Podcast.

Play Episode Listen Later Jan 28, 2026 149:01


Sometimes you just gotta check in.

In Episode 262, Luap and Chop, the HNICs of JSE, kick back for a long overdue catch-up. No big topic list, no pressure, just real conversation. Life updates, random thoughts, laughs, and the kind of talk that only happens when the mics are on and the vibes are right. If you've been rocking with Just Spitballin for a while, this one feels like home.

If you like what you are hearing, be sure to follow our social media:
Facebook: Just Spitballin Ent.
Twitch: JustSpitballinTTV
Twitter: @JSpitballin
Instagram: justspitballin_ent

Tracy Crossley's Podcast
#781: Where is the Fun in Being Attached

Tracy Crossley's Podcast

Play Episode Listen Later Jan 27, 2026 16:04


You think it's the person. The weight. The job. The relationship status. You think if you just solve that one thing, you'll finally feel okay inside. But here's what's actually happening: you're using that external problem to avoid the deeper feelings you don't want to touch. If you wait for everything to be okay before you have fun, you're going to be waiting till the 12th of never. Give yourself a fucking break. Start changing what you can actually change. In this episode, Tracy explores: * Why solving external problems won't fix your insides * How to change your energy and relationship to situations * Escaping your feelings vs. connecting to them * Why fun feels dangerous * A simple tool to get back in your body "It's safer to feel like shit than it is to feel great." ~ Tracy Crossley

escaping attached tracy crossley
those F%#KING fangirls
#147 | We're in 2026 - but the bell of the ball is 2016

those F%#KING fangirls

Play Episode Listen Later Jan 23, 2026 130:21


Christine Riccio & Natasha Polis talk all things nerdy in the book, tv, movie, pop culture, fandoms, and how they integrate into their adult lives. Today they're ringing in the start of 2026 by discussing the 2016 trend, and all the content that dropped over the holiday break! They're chatting People We Meet on Vacation, the golden globes, Ponies, Emily in Paris, Heated Rivalry, Traitors, Stranger Things, the Taylor Swift doc, and more!
Today in Fangirl Tea Time: Join Christine and Natasha for more stories about their recent life escapades.
Support the pod by joining the Forking Fangirls Patreon community: http://patreon.com/thoseforkingfangirls
TEAM EDWARD: new bonus AMA is up now, and the first Heated Rivalry episode commentary will be up shortly if it's not already!
MAIN DISCUSSION STARTS AT: 1:35:57
Follow the visual show on our Youtube: http://youtube.com/@thoseforkingfangirls
Get Christine's new book THIRTY, FLIRTY, & FOREVER ALONE: https://www.amazon.com/dp/1662532156
We'll be at LOVE LIT CON in San Diego! https://lovelit.com/
Our TFF Panel with Christina Lauren will be at 2:15 pm on Friday!!

RELIGIOUS LIBERTY REPORT
199 - MARCH FOR LIFE - THE 4-14 WINDOW - PROTESTS IN CHURCH - AI-POWERED RELIGION

RELIGIOUS LIBERTY REPORT

Play Episode Listen Later Jan 23, 2026 29:02


Dear RLR Listeners,

Attached is Religious Liberty Report Episode 199, where we cover the 2026 MARCH FOR LIFE - THE 4-14 WINDOW - PROTESTS IN CHURCH - AI-POWERED RELIGION.

Thank you for your valuable support.

Sincerely,
Alexander
+1 (305) 450-8550
aalfano@lawalfano.com

Rumble in the Morning
Stupid News 1-21-2026 8am …He got his ankle monitor off and attached it to a stray dog

Rumble in the Morning

Play Episode Listen Later Jan 21, 2026 10:48


Stupid News 1-21-2026 6am …How did they not know? …He got his ankle monitor off and attached it to a stray dog

Creators Table with Drew Cost
Closing Chapters Without Closure: When God Says “It's Finished” but You're Still Attached

Creators Table with Drew Cost

Play Episode Listen Later Jan 21, 2026 11:36


Some chapters don't end because something broke. They end because they're finished.

In this episode, we explore what it really means to close chapters without closure: when God calls you forward but your emotions, identity, or sense of responsibility are still attached to what once worked.

This conversation is for anyone wrestling with letting go without closure, navigating spiritual growth, or feeling the tension between faith and obedience when nothing is technically “wrong.” We talk honestly about identity and seasons, burnout in ministry, and the quiet moments where you're hearing God's voice say, “Move on,” even when you don't fully understand why.

You'll hear why obedience without understanding is often harder than walking away from something painful, how closure can become a form of control, and why personal growth sometimes requires releasing roles, rhythms, or expectations that once served you well.

This is a faith-based podcast episode for leaders, believers, and anyone navigating emotional detachment, leadership transitions, or that unsettling moment when God says move on but your heart hasn't caught up yet.

If this episode resonated with you, you don't have to process this season alone. The Breakthrough Community is a space for honest conversations around faith, identity, leadership, and personal growth, especially for those closing old chapters and stepping into new ones without needing to perform or explain themselves. It's a grounded, supportive community for people pursuing clarity, healing, and obedience in real life.

You can learn more or join here:

Denver Real Estate Investing Podcast
#599: 2026 Denver Small Multifamily Listings Jump 300% In One Week

Denver Real Estate Investing Podcast

Play Episode Listen Later Jan 20, 2026 43:50


The Denver December 2025 market update reveals a shifting landscape for real estate investors. Inventory ended the year at 7,600 active units – up 10% from December 2024 but down sharply from November’s 10,500 units as sellers pulled listings heading into the holidays. The bigger story? Attached properties (condos and townhomes) surged 20% year-over-year while detached homes stayed relatively flat, signaling where market pressure is building. Then the new year arrived and everything accelerated. Chris Lopez hosts Troy Howell from Nova Home Loans and Jeff White from Envision Advisors to cover Denver’s December 2025 market update. The panel covers Denver metro year-end trends, interest rate movements, and what just happened in the first week of the new year. Over 20 small multifamily properties hit the market in just the first 8 days of January – an unusual flood of inventory during the worst season to sell. Troy reveals interest rates dropped nearly a full percentage point year-over-year (from 7.04% in January 2025 to 6.16% in January 2026) with predictions for continued decline, while data shows 6%+ mortgages now outnumber sub-3% loans nationwide, signaling the lock-in effect may finally be breaking. The panel digs into what December’s inventory patterns mean for 2026 buying opportunities, examining why motivated sellers are listing in winter and how this creates negotiation leverage. Jeff conducts live underwriting of a $750K 4-plex near South Broadway that dropped $139K in price, walking through actual spreadsheet analysis comparing house hacking (5% down, 9.39% cash-on-cash return) versus traditional investing (25% down, 5.75% return). Both strategies dramatically outperform the 1-2% market average most investors are seeing, proving cash flow still exists in Denver’s current market conditions. Watch the Youtube Video https://youtu.be/zKNDot-SdjE In This Episode We Cover: December 2025 inventory recap: 7,600 units (up 10% YoY from Dec 2024), why attached properties jumped 20% while detached stayed flat Why 20+ small multifamily listings flooded Denver in January 2026’s first 8 days during the worst selling season Interest rate trends: Down from 7.04% (Jan 2025) to 6.16% (Jan 2026), with VA loans reaching low 5% range How the lock-in effect is ending as 6%+ mortgages now exceed sub-3% mortgages nationwide Live underwriting showing $750K 4-plex delivering 9.39% returns for house hackers vs 5.75% for investors Colorado Springs new construction duplex deal with 100% VA financing and 12-month occupancy flexibility Why properties are selling at 2018-2019 price levels and what this means for long-term investors December’s data confirms inventory is building but hasn’t reached problematic levels – we’re still well below the 15,000-30,000 units seen during the 2008-2012 period. The seasonality cliff from 14,000 summer units down to 7,600 by year-end is normal, but what’s not normal is the January 2026 surge of motivated sellers listing during peak winter. Troy explains how current rates make deals pencil again after years of struggle, while Jeff’s spreadsheet analysis proves the math works for both house hackers and traditional investors. Subscribe to our reactivated deal alert emails and join our February 2026 webinar for deeper small multifamily analysis as we track how this inventory surge plays out through the year. 
Timestamps
00:00 – Welcome & New Year Market Update Introduction
01:43 – December Inventory Analysis: 7,600 Active Units Up 10% Year Over Year
04:15 – Why Attached Properties Jumped 20% While Detached Stayed Flat
07:15 – The January Flood: 20+ Small Multifamily Listings in 8 Days
12:47 – Live Deal Analysis: $750K 4-Plex Near South Broadway (Dropped $139K)
16:23 – House Hacking Numbers: Live in Your Unit for $1,338/Month
19:20 – Investor Analysis: 5.75% Cash-on-Cash vs 1-2% Market Average
25:28 – New Construction Duplex Deal: 100% VA Financing in Colorado Springs
27:19 – VA Loan Occupancy Rule: 12 Months vs 60 Days for Conventional
33:12 – Interest Rate Update: 6.16% Down from 7.04% One Year Ago
35:06 – Mortgage Lock-In Effect Ending: 6%+ Loans Now Exceed Sub-3% Mortgages
36:38 – Trump Proposes Ban on Institutional Single-Family Home Buyers

Connect with our Guests:
Jeff White: jeff@envisionrea.com
Troy Howell: troy.howell@novahomeloans.com
LinkedIn: Troy Howell
Website: https://www.novahomeloans.com/loan-officer/troy-howell/

Links in Podcast
For the First Time in Years, More Homeowners Have a 6% Mortgage Rate than a 3% One
Subscribe to our Reactivated Deal Alert Emails
Download the Free House Hacking Spreadsheet

Who is Keyrenter? Keyrenter Property Management Denver provides rental solutions for homeowners and real estate investors in the metro area who are interested in transforming their properties into passive income. It offers various services, from property marketing and thorough applicant screening to tenant placement and 24/7 maintenance services. Keyrenter Denver's team of experts can take the clients’ burden of managing their rental off their hands so they can get back to what matters to them.

Who is Nova Home Loans? For over 40 years, we've been focused on helping homeowners find the perfect loan to fit their financial needs and personal goals. Working with NOVA is a personalized experience from initial application to final loan closing and beyond. We will be with you every step of the way toward successful homeownership. Start working with NOVA & Troy Howell today!

NOVA FINANCIAL & INVESTMENT CORPORATION, DBA NOVA HOME LOANS NMLS 3087 / EQUAL HOUSING OPPORTUNITY / 8055 EAST TUFTS AVENUE, SUITE 101 / DENVER, CO

It's the Bottom Line that Matters Podcast
How to Vet Business Opportunities Without Getting Emotionally Attached

It's the Bottom Line that Matters Podcast

Play Episode Listen Later Jan 20, 2026 24:41


On this episode of It's The Bottom Line that Matters, hosts Jennifer Glass, Daniel McCraine, and Patricia Reszetylo dive deep into the art and strategy of vetting business opportunities, without letting emotions cloud your judgment. From personal stories of jumping too quickly into deals to considering the hidden costs, alignment, and the people behind the opportunity, the conversation covers essential criteria every entrepreneur should consider before saying yes (or no).

Explore how evaluating business opportunities isn't just about finances and fit, but also about the impact on your overall freedom, business trajectory, and long-term success. Whether you're looking at new partnerships, expanding your services, or considering a startup, this episode provides practical insights on asking the right questions, recognizing red flags, and making decisions that move your business forward.

Tune in to hear real-world experiences, thoughtful debate, and expert advice, all aimed to help you make smarter decisions for your bottom line.

Listen now and learn how to vet business opportunities with strategy, discernment, and confidence.

About the hosts:

Jennifer Glass sets the tone for the "It's The Bottom Line That Matters" podcast, guiding listeners through the nuances of making business decisions with strategy and clarity. Jennifer's journey reflects someone who is not afraid to leap into new opportunities, even if it means stepping outside her comfort zone. She credits her willingness to join coaching groups and mastermind programs with shaping her network, career, and ultimately bringing together the podcast co-hosts. Through her experiences, whether purchasing a mastermind or integrating services that align with her business, Jennifer emphasizes the importance of thinking strategically, paying attention to connections, and always considering if an opportunity fits her vision of freedom.

Daniel McCraine is a consultant with a flair for evaluating business opportunities, sometimes jumping quickly, as with his story about acquiring a robocalling company. He candidly discusses the lessons learned from opportunities that didn't pan out, stressing the importance of alignment, resources, and strategic fit. Daniel's openness to new ventures, even when they fit “hand in glove,” is balanced by his wisdom to walk away when things just aren't right. He brings a practical lens, reminding listeners that sometimes saying “no” to even good opportunities is part of being a successful entrepreneur.

Patricia Reszetylo brings a reflective and experiential approach to business growth. She shares how joining a coaching consortium challenged her on multiple levels and, despite not being fully prepared for the path, she views the experience as a stepping stone, one that led to meaningful relationships and new career directions. Patricia focuses on the people behind business opportunities, recognizing that the nature of collaboration and partnership can make or break ventures. Her insights encourage listeners to consider not just the business models but also the personalities and teams involved.
Their shared message: think strategically, evaluate deeply, and surround yourself with the right people for success.

Keywords: business opportunities, vetting opportunities, emotional decision making, business expansion, hiring decisions, business acquisitions, marketing tools, business alignment, startup challenges, resource allocation, opportunity cost, evaluating opportunities, financial investment, customer base, partnerships, joint ventures, mastermind groups, coaching consortium, product expansion, review management, business growth, risk management, strategic decision making, saying no, opportunity evaluation criteria, relationship with partners, business trajectory

The Aaron Doughty Podcast
EP#787 If you're trying to let go but still attached, please watch this…

The Aaron Doughty Podcast

Play Episode Listen Later Jan 19, 2026 31:39


Trying to let go can actually be the thing keeping you attached. In this episode, I explain how codependency, fixing, and overthinking block real connection. When you come back into your own energy, letting go becomes natural. If you want to join my next in-person live event in Los Angeles on January 31st–February 1st and step into the most magnetic version of yourself, grab your ticket here: https://www.theshiftexperience.com/la 

Audio Dharma: Gil Fronsdal's most recent Dharma talks
Dharmette: Love (9) Non-Attached Love and Grief

Audio Dharma: Gil Fronsdal's most recent Dharma talks

Play Episode Listen Later Jan 15, 2026


This talk was given by Gil Fronsdal on 2026.01.15 at the Insight Meditation Center in Redwood City, CA. ******* Video of this talk is available at: https://www.youtube.com/live/fups8oBeEVQ?si=NRASkIMC8LELqG07&t=1860. ******* For more talks like this, visit AudioDharma.org ******* If you have enjoyed this talk, please consider supporting AudioDharma with a donation at https://www.audiodharma.org/donate/. ******* This talk is licensed by a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License

video grief attached redwood city gil fronsdal insight meditation center
C3 Church San Diego // AUDIO
No String Attached - Ps. Michael Hundley

C3 Church San Diego // AUDIO

Play Episode Listen Later Jan 14, 2026 32:58


Freedom is one of the greatest gifts we have been given as children of God, but many Christians are not living free. In this powerful message filled with worship Ps. Michael provides some insight on how to break free and stay free with a heavenly purpose.

Design Your Destiny
Is Someone Safe to Trust? How to Know Before You Get Attached

Design Your Destiny

Play Episode Listen Later Jan 13, 2026 43:33


Is someone safe to trust—or does something feel off that you can't quite explain? This is the question people aren't asking out loud… But they are asking at 2am. In this episode, I sit down with Joseph McGuire, a facial reader and body language expert with a background in martial arts, somatic work, and Eastern diagnostic systems. This is not a conversation about "catching liars" or reading faces to judge people. It's a conversation about discernment, nervous system safety, and identity-level patterns that quietly determine who we trust—and why we keep repeating the same relationship dynamics. Many people assume repeating relationship patterns mean they're making bad choices. In reality, the subconscious mind is wired for familiarity, not safety. The nervous system will often choose what it knows—even when what it knows is painful. In this conversation, we explore: Why the brain confuses familiarity with emotional safety How early conditioning shapes who feels "right" to trust The difference between intuition and fear-based reactions Why eye contact and presence matter more than words How people unknowingly give away power in relationships Why "being nice" can override self-protection How identity patterns repeat until they're interrupted We also talk about subtle but important cues to pay attention to when meeting someone new—especially when you feel uneasy but can't logically explain why. Learning how to tell if someone is safe to trust isn't about becoming hyper-vigilant or suspicious. It's about reconnecting with your own internal signals. When your nervous system is regulated and your sense of self is grounded, discernment becomes natural. You stop explaining red flags away. You stop rescuing or over-functioning. You stop outsourcing your safety. And connection becomes simpler—not more complicated. Connect with Joseph  on LinkedIn  If you're noticing repeating patterns in relationships, trust, or self-abandonment—and you want to understand how the subconscious mind learns safety and familiarity—Hypnosis Secrets Unlocked is a two-day live workshop where I break this down clearly and ethically. You'll learn: How the mind wires emotional safety Why insight alone doesn't interrupt identity loops How hypnosis works at the pattern level, not the surface Reserve your spot HERE. 

X22 Report
Panic Everywhere,[DS] World Is Coming To An End,Message Sent,Patriots Are In Control – Ep. 3811

X22 Report

Play Episode Listen Later Jan 5, 2026 88:22


Watch The X22 Report On Video

The [CB] system is being dismantled, Trump getting control of the oil will begin to bring prices down further, once Iran has regime change, it is game over for the [DS]/[CB] system. Gas prices will fall further when the US begins to drill. The [CB] debt is in violation of the constitution and most of it will most likely be wiped out and the [CB] will cease to exist. The [DS] is panicking, from dictators, fake news and the D’s they are all panicking. The [DS] world is now coming to an end and it is being exposed and dismantled for the world to see. The [DS] is no longer in control, the patriots are. Trump and team sent a clear message, everything you are seeing is to return the power back to the people.

Economy

https://twitter.com/KobeissiLetter/status/2007823029846372858?s=20 https://twitter.com/Geiger_Capital/status/2008196746653151644?s=20 https://twitter.com/echodatruth/status/2008056541627228502?s=20 to $1 TRILLION in Latin American precious metals, including Venezuelan supply. Let that sink in. An $8 BILLION state-of-the-art facility, jointly backed by Wall Street capital and the U.S. Department of Defense, now sits at the center of the supply chain. This isn't about invasion. This is about control, security, and price discovery.
• Physical metals moving out of unstable regions
• Refining brought back under U.S. oversight
• Paper markets losing influence
• Strategic metals secured for energy, defense, and AI
When governments build first and explain later, it's not speculation, it's preparation. Silver isn't being hyped. It's being positioned. Know What You Hold.
https://twitter.com/profstonge/status/2008176575833948484?s=20 roads
4. Bankruptcy, counterfeiting, piracy laws
5. Patents and copyrights
6. Regulate commerce with foreign nations, between states, and with Native tribes
7. Declare war; maintain army, navy, and militia
8. Establish lower federal courts
9. Exercise authority over Washington, D.C.
That means roughly 80% of federal spending is, in fact, illegal.

Political/Rights

https://twitter.com/FBIDirectorKash/status/2007937505296093357?s=20 (up 31%) enough to kill 130 million Americans
-Nihilistic Violent Extremism arrests up 490%
-Over 6,000 child victims located (up 22%)
-Espionage arrests up 35%
-Multiple successful surges including Summer Heat which had almost 9,000 arrests in just three months
This FBI is saving lives, protecting innocent kids, and taking deadly drugs off our streets at levels not seen in decades. None of it would've been possible without Dan's leadership and support. And he paved the way for even better things to come. Thank you @dbongino .
https://twitter.com/PressSec/status/2008177002608779675?s=20

DOGE

Geopolitical

https://twitter.com/jsolomonReports/status/2007493457338605628?s=20
https://twitter.com/Leon4Congress/status/2007969020352647528?s=20

2020 indictments, $15 million bounty, and expanded sanctions. In 2022, President Biden increased the then-$15 million bounty on Maduro to $25 million: $25 million for anyone who can deliver Maduro to America. In 2026, Trump executes the orders of Obama and Biden. Who is the joker, hero or villain? Obama, Biden or Trump?

https://twitter.com/amuse/status/2008198931985879499?s=20

to power. Why?

https://twitter.com/robbystarbuck/status/2008061863565852729?s=20
https://twitter.com/mattvanswol/status/2007919000773353481?s=20
https://twitter.com/ElectionWiz/status/2008155905880453463?s=20
https://twitter.com/ColonelTowner/status/2007827528711590045?s=20
https://twitter.com/WallStreetMav/status/2008188125617569887?s=20

start taking back its deported gang members.

https://twitter.com/ElectionWiz/status/2007988528677052517?s=20
https://twitter.com/DerrickEvans4WV/status/2008083325802696896?s=20
https://twitter.com/RapidResponse47/status/2008032031876202758?s=20
https://twitter.com/ElectionWiz/status/2008176950427423164?s=20

Trump wants to make a deal with Mexico like he did with the Nigerian government. The cartels are going to be eradicated.

https://twitter.com/robbystarbuck/status/2007990748910682257?s=20

grandparents, etc. It's been a dream they prayed to witness. 3/4 of my grandparents didn't survive to see it. Attached are some photos of my Grandpa Julio "Papi," who's still alive, and my deceased Grandma Martha in Cuba during better times as young lovebirds. Fidel Castro stole everything but their love and their lives. Same with my other grandparents Rafael and Ophelia and my Mom. They lost everything but their love and their lives. Now there's hope of a free Cuba for our long-lost family there, and hope of making past wrongs right once again. I'm with President Trump all the way. Cuba should be a rich island paradise, and it can be as a US territory. It's a strategic asset for our safety too, as a base of operations to defend our homeland in the mainland US. There's no downside to toppling the communists who've only stayed in power by killing and jailing Cubans for decades. Now is the time. It can also serve as a helpful spot to run any US/Venezuela operations that benefit America, instead of a narco pass-through entity used by our enemies as a constant threat to American safety. Russia, China, Venezuela and many others have used Cuba to threaten us for long enough. It's time we take control and empower the Cuban people. No American blood needs to be spilled. This can be a massive win for the future of both Cuba and, more importantly, for America. It's time for the evil of communism to die.

https://twitter.com/AwakenedOutlaw/status/2007882386529542519?s=20
https://twitter.com/FaytuksNetwork/status/2008187454595969240?s=20

rials monthly ($7).

https://twitter.com/AwakenedOutlaw/status/2007930486438682861?s=20
https://twitter.com/RyanSaavedra/status/2007978922458444265?s=20

longer had it. He did something and saw the consequences." The message: Leave now.
Ayatollah Khamenei plans to flee to Moscow if Iran unrest intensifies. The republic's supreme leader has plotted an exit route out of Tehran should his forces fail to quell dissent, an intelligence report reveals.

https://twitter.com/disclosetv/status/2008206247808700734?s=20

War/Peace

Medical/False Flags

[DS] Agenda

https://twitter.com/remarks/status/2007947270910841313?s=20
https://twitter.com/EndWokeness/status/2008031475057439076?s=20

Weaver outline how homeowners will need to modify their view of their property ownership to reflect a new municipal perspective that considers all individually owned property to be part of a new collective property viewpoint as controlled by city government. "For centuries we really treated property as an individualized good and not a collective good, in transitioning into treating it as a collective good and towards the model of shared equity … it will mean that families, especially White families … are going to have a different relationship to property than the one that we currently have." It is likely that Mayor Mamdani and Director Weaver are going to run into some stiff legal opposition as they try to reimagine a world where individuals are not allowed to own property.

https://twitter.com/AAGDhillon/status/2008207308950782417?s=20
https://twitter.com/amuse/status/2007866604139225514?s=20

briefings. After 9/11, New York's mayors kept the NYPD commissioner in a direct, daily intelligence loop. That model is now ending. Mamdani has removed Commissioner Jessica Tisch's direct line to his office, relegating police leadership to the same access level as garbage collection. The shift weakens situational awareness at the top & reflects a belief that Islamic terror threats no longer require mayoral focus.

https://twitter.com/EricLDaugh/status/2008183851802337656?s=20
https://twitter.com/wcdispatch/status/2008018760746078438?s=20

done, in my opinion, an even more dishonest and incompetent job. NO ONE IS ABOVE THE LAW!

Mugshot Emerges of Deranged Man Accused in Vance Home Attack, VP Blasts Media for Publishing Home Images

Authorities have released the mugshot of 26-year-old William DeFoor following his arrest for allegedly attempting to break into Vice President JD Vance's Cincinnati home with a hammer. The booking photo, posted by the Hamilton County Justice Center, also lists the charges DeFoor is facing, including vandalism, criminal trespass, criminal damaging or endangering, and obstructing official business. Cincinnati police and Secret Service agents responded swiftly to reports of the vandalism, arriving at the scene to detain the man without further incident. No one was injured, as Vance and his family had already left for Washington, D.C. at that time.

https://twitter.com/JDVance/status/2008188525162721647?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E2008188525162721647%7Ctwgr%5Ec29f78485445e314b120eda36408e134f4f5245a%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fredstate.com%2Frusty-weiss%2F2026%2F01%2F05%2Fmugshot-emerges-of-deranged-man-accused-in-vance-home-attack-vp-blasts-media-for-publishing-home-images-n2197767

already to DC. One request to the media: we try to protect our kids as much as possible from the realities of this life of public service. In that light, I am skeptical of the news value of plastering images of our home with holes in the windows.

Source: redstate.com

President Trump's Plan

https://twitter.com/SecWar/status/2008189258528665898?s=20

is still accountable to military justice.
And the Department of War — and the American people — expect justice. Therefore, in response to Senator Mark Kelly's seditious statements — and his pattern of reckless misconduct — the Department of War is taking administrative action against Captain Mark E. Kelly, USN (Ret). The department has initiated retirement grade determination proceedings under 10 U.S.C. § 1370(f), with reduction in his retired grade resulting in a corresponding reduction in retired pay. To ensure this action, the Secretary of War has also issued a formal Letter of Censure, which outlines the totality of Captain (for now) Kelly's reckless misconduct. This Censure is a necessary process step, and will be placed in Captain Kelly's official and permanent military personnel file. Captain Kelly has been provided notice of the basis for this action and has thirty days to submit a response. The retirement grade determination process directed by Secretary Hegseth will be completed within forty-five days. Captain Kelly's status as a sitting United States Senator does not exempt him from accountability, and further violations could result in further action. These actions are based on Captain Kelly's public statements from June through December 2025, in which he characterized lawful military operations as illegal and counseled members of the Armed Forces to refuse lawful orders. This conduct was seditious in nature and violated Articles 133 and 134 of the Uniform Code of Military Justice, to which Captain Kelly remains subject as a retired officer receiving pay.

https://twitter.com/TonySeruga/status/2008201370458075286?s=20

energy, and corporatism, all are reliant on the narcos for dark funding. Just look at how they are treating Maduro. It's like he is a rock star, already with five "costume" changes just today. Does Maduro look worried?

THE FIX IS IN? YOU CAN'T MAKE THIS UP: 92-Year-Old Clinton Judge Who Denied Trump's Hush-Money Removal to Federal Court and Blocked Venezuelan Gang Deportations Now Assigned to Preside Over Maduro Case in New York

President Trump Shuts Down Fake News Reporter Trying to Pit Rubio and Vance Against Each Other (AUDIO)

Trump spoke to reporters aboard Air Force One as he headed back to the White House on Sunday evening after spending the Christmas holiday at Mar-a-Lago in South Florida. President Trump shut down a fake news reporter who was trying to create a wedge between Vice President JD Vance and Secretary of State Marco Rubio. A legacy media reporter tried to stir up a little trouble and President Trump promptly shut her down. "Would you say that Marco Rubio has your ear more than the Vice President right now?" a reporter asked President Trump. Trump shut it down. "No! They both do. JD is very smart and doing a great job and so is Marco! I would say they're equal," Trump said. The reporter continued, "It sounds like [Rubio] is the go-to and you were just talking about Cuba and what could come next there."

AUDIO: Source: thegatewaypundit.com

https://twitter.com/AwakenedOutlaw/status/2008092328867869069?s=20

a plea of some sort. In fact, that may well have been pre-negotiated, thereby removing the judge's ability to thwart the prosecution. These images support as much.
https://twitter.com/Rasmussen_Poll/status/2007939030839701667?s=20

election systems currently in use here were newly examined last year by Federal authorities and are apparently FULL of illegal CCP-sourced items.
– While @DNIGabbard is still withholding her completed official report on this, her boss is now aggressively retweeting older descriptors of evidence against Dominion and our US Election Theft Syndicate in general. This is apparently the overture of what is to come.
– The Secret Dominion/Huawei Data Center in Belgrade, Serbia – that emphatically and officially did not exist – DID exist and was disabled by U.S. gov employees just days prior to the 2024 election. It has now been dismantled, which may disappoint former CIA Director John Brennan, who reportedly financed half of it from the CIA 'Black Budget.' The other half of the funding was from our dear friends in China. That's right: the theft of the US Presidency and multiple other elections worldwide was co-financed by our own CIA.
– Top Venezuelan engineers who reportedly designed and executed multiple foreign-based election frauds in America using Dominion and Smartmatic systems are in America under U.S. gov protection and have provided sworn testimony. They include an engineer who personally helped illegally install Joe Biden as President in 2020.
– These engineers are joined by General Hugo Carvjal, former Head of Venezuelan Intelligence, now in jail in New York (his cellmate is Diddy Combs), who is cooperating with Fed authorities (see below).
– Another Venezuelan General has now also joined General Carvjal in providing first-person testimony.
– Official state and court-adduced evidence of 2020 election fraud has been compiled for every one of the battleground states. Cowardice and corruption within the American judiciary have scuttled any real progress.
– Georgia corruption came into better focus last month as Fulton County admitted not following the law concerning over 300K 'votes,' and then their most corrupt state judge agreed to unseal the 2020 'warehouse ballots,' many of which are officially sworn to be likely counterfeit. What a sad, crooked bunch.
– The DOJ is suing multiple states to require compliance with Federal election laws, including HAVA – Georgia is among them – and @AAGDhillon is leading the charge.
– President Trump pardoned Tina Peters, but corrupt Colorado officials refuse to release her from prison. Colorado wants to litigate her role as a Federal officer in their elections while her health declines due to their horrible conditions. Colorado officials are going to pay dearly.
– An American Armada, the likes of which hasn't been assembled in this century, sits off the coast of U.S. Election Theft Central. They are resting up after the historic strike extraction of Maduro. They will not idle long. The President promises to clean out all the cartel del Soles thugs and return Venezuela to democratic self-governance. A big job, but essential to keeping America safe and its enemies out of our hemisphere and out of our elections.

https://twitter.com/WarClandestine/status/2007981628648206368?s=20

which gave hope to the low-morale Continental Army and boosted enlistment, and eventually led to victory. I think Trump and the US MIL were sending a message. Now is when we start winning the war against the Deep State. I think we have graduated into a new phase of the operation.
https://twitter.com/WarClandestine/status/2007924998703366560?s=20

necessary for what comes later, when Trump invokes the Insurrection Act and sends the US MIL to cities nationwide. If the US MIL is going to conduct mass arrests, the public will need to trust them and trust Trump. So for those asking why Trump is arresting Maduro before arresting treasonous actors in the US, I think there is method to the madness. The high-profile US arrests will likely come toward the end, after more of the public are fully bought in on the operation to dismantle the Deep State. Arresting people is the easy part. Convincing billions of people that high-profile individuals, including former heads of state, need to be arrested… that's the tricky part.

https://twitter.com/RapidResponse47/status/2008033626294792665?s=20
https://twitter.com/USDOL/status/2007933111729021305?s=20

Optimal Relationships Daily
2858: [Part 2] When Your Partner Isn't Sure They Want a Future With You by Tonya Lester on Relationship Uncertainty

Optimal Relationships Daily

Play Episode Listen Later Jan 5, 2026 6:28


Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com.

Episode 2858: Tonya Lester explores the painful uncertainty of being with someone who can't commit, offering a grounded, compassionate roadmap for reclaiming your voice and agency. Listeners will gain clarity on when to wait, when to walk, and how to protect their self-worth in relationships that feel stuck in limbo.

Read along with the original article(s) here: https://www.tonyalester.com/blog//when-your-partner-isnt-sure-they-want-a-future-with-you

Quotes to ponder:
"Everyone should have a bottom line regarding what they want from a partner in a relationship."
"It's important to note that a healthily attached person can become anxiously attached if they spend too long with an avoidant partner."
"Be realistic. Is the person in front of you who you really want? Or are you waiting for them to conform to your fantasy of who they could be?"

Episode references:
Eat, Pray, Love: https://www.amazon.com/Eat-Pray-Love-Everything-Indonesia/dp/0143038419
Attached: https://www.amazon.com/Attached-Science-Adult-Attachment-YouFind/dp/1585429139

Wizard of Ads
85 Cents an Hour

Wizard of Ads

Play Episode Listen Later Jan 5, 2026 5:32


In 1958, Paul made 85 cents an hour working in a limestone quarry in Oklahoma. He was a man of character, integrity, and kindness. He was quiet, smiled a lot, and was a wonderful listener. Paul's humility, kindness, and confidence gave him dignity and authority in the eyes of everyone who knew him. He was happily married and had three little girls.

On the day his fourth little girl was born, he walked into a storm that could easily have ripped him apart. It was with great heaviness of heart that Doctor Franklin told him that there was a problem with the Rh factor in the little girl's blood and that she was almost certainly going to die. She was barely, barely, barely hanging on. With tears in his eyes, Doctor Franklin told him, "And your wife is also fading fast." Doctor Franklin dropped his chin to his chest as teardrops splashed on his shoes. An ambulance rushed both mother and daughter to a larger hospital in a larger town. Paul was all alone with eighty-five cents an hour and three little girls.

Several hours later, a happy and rejoicing Doc Franklin told Paul that both mother and daughter were going to live! They were going to live. The medical bill was more than a thousand dollars and there was no insurance; just a husband and wife and four little girls and 85 cents an hour. Being a man of integrity, Paul went to see Doc Franklin the next day to set up a payment plan for paying that thousand-dollar medical bill. Doc Franklin said, "What medical bill?" Paul was confused, and it showed on his face. Old Doctor Franklin spoke plainly, "There is no medical bill. You do not owe any money. Just be a good father to those girls."

"Just be a good father to those girls." I can testify that he was a good father to those girls. I met Paul Compton when I was 14 years old and in love with his daughter, the one who nearly died on the day she was born. Here's how I met him.

One week prior to beginning my freshman year in high school, my mother received an invitation to an open house at the school on a Tuesday night, where she could meet Coach Jerry Meeks, my home room teacher. He taught Oklahoma History, of course. Attached to that letter was a list of all the other students who would be in my first-hour class. I saw that Pennie Compton was going to be in that class with me. She knew who I was, but we had never actually met. This would be the first time that we would be in class together. Mom couldn't go that night, which suited me fine. I had a plan of my own.

I was the first person to arrive. The parking lot was empty except for the cars of the teachers. I met Coach Meeks, then took a seat at a desk in the back row. About 30 minutes later, a tall man came walking in with his wife and the girl that I knew I was going to marry. After Paul and his wife exchanged pleasantries with Coach Meeks, I walked up to him, introduced myself, then shook his hand as I smiled and said, "My name is Roy Williams and you're going to be seeing a lot of me."

Last week Princess Pennie and I celebrated our 49th wedding anniversary. Paul never criticized me or gave me advice unless I asked for it. But when I did ask, he would tell me what he thought, along with some true stories from his own life that explained why he believed what he believed. He always spoke slowly and gave me his full attention. His confidence in me was a great encouragement. In all the decades that I knew Paul Compton, I never saw him raise his head from prayer without having tears on his cheeks. When Paul talked to God, you knew that God was listening.

I always looked forward to

Sales Secrets From The Top 1%
Selling Feels Hard When You're Attached to the Outcome | #1302

Sales Secrets From The Top 1%

Play Episode Listen Later Jan 4, 2026 3:10


Many sellers struggle not because of skill gaps, but because attachment leaks into their conversations. In this episode, Brandon breaks down how emotional pressure shows up subtly, why buyers sense it immediately, and how detachment creates safety rather than distance.

You'll learn how to care deeply without needing the deal, why neutral language signals confidence, and how shifting from outcome-focused to clarity-focused selling changes everything. This episode reframes selling as emotional regulation, and explains why the calmest seller often wins.

those F%#KING fangirls
146 | OUR TOP 25 Moments of 2025

those F%#KING fangirls

Play Episode Listen Later Jan 2, 2026 139:11


Christine Riccio & Natasha Polis talk all things nerdy in books, TV, movies, pop culture, and fandoms, and how it all integrates into their adult lives. Today they're going through their top 25 moments of 2025! The last episode of 2025!! PLUS they chat the Taylor Swift doc and more! After this episode we'll be on hiatus for two weeks; then we'll be back for season 4!!

Today in Fangirl Tea Time: Join Christine and Natasha for more stories about their recent life escapades.

Support the pod by joining the Forking Fangirls Patreon community: http://patreon.com/thoseforkingfangirls
MAIN DISCUSSION STARTS AT: 25:00
Follow the visual show on our Youtube: http://youtube.com/@thoseforkingfangirls
Preorder Christine's new book THIRTY, FLIRTY, & FOREVER ALONE: https://www.amazon.com/dp/1662532156

THE THIRTY FLIRTY AND FOREVER ALONE BOOK TOUR:
Brooklyn, New York - January 6th @ Word Bookstore - 7PM, in conversation with Alexandria Bellefleur. RSVP: https://withfriends.co/event/27177316/christine_riccio
Collingswood, New Jersey / Philly area - January 7th @ Kiss & Tale Romance Bookshop - 6PM, in conversation with Hannah Nicole Maeher. TICKETS: https://kisstalebookshop.com/events/3873820260107
Memphis, TN - January 9th @ Novel Memphis - 6PM, in conversation with Kelsey Impicciche. 387 Perkins Ext., Memphis, TN 38117. RSVP: https://novelmemphis.com/event/2026-01-09/christine-riccio-w-kelsey-impicciche-thirty-flirty-and-forever-alone
Austin, TX - January 11th @ Lark & Owl Booksellers - 7PM, in conversation with Natasha Polis. 205 6th St Suite 101, Georgetown, TX 78626. TICKETS: https://www.larkandowlbooksellers.com/products/christine-riccio-author-event-thirty-flirty-and-forever-alone
Culver City, CA - January 14th @ The Ripped Bodice - 7PM, in conversation with Olivie Blake/Alexene Farol Follmuth. 3806 Main St, Culver City, CA 90232. TICKETS: https://www.therippedbodice.com/events-and-tickets

Add Thirty Flirty & Forever Alone on Goodreads: https://www.goodreads.com/book/show/230393104-thirty-flirty-and-forever-alone
Check out Natasha's sewing classes: https://www.natashapolis.com/ (Join our Patreon to get 10 dollars off the classes!)
Website: https://thoseforkingfangirls.com/
Email us feedback: thoseforkingfangirls@gmail.com
Instagram: https://www.instagram.com/thoseforkingfangirls/
Twitter: https://twitter.com/forkfangirlspod
TikTok: https://www.tiktok.com/@thoseforkingfangirls
Get Christine's novel Attached at the Hip: https://a.co/d/grmPeVy
Check out the Selkie Collection and get 10% off your order with code TASHAPOLIS: https://selkiecollection.com/collections/all

Daily Emunah Podcast - Daily Emunah By Rabbi David Ashear

This week's parashah, Vayechi, is known as a parashah setumah—a closed parashah—because there is no space in the Torah between the end of Vayigash and the beginning of Vayechi. Rashi explains that one reason for this is that the eyes and hearts of the Jewish people became "closed" when Yaakov Avinu passed away, from the pain and pressure of the bondage.

The mefarshim ask a powerful question. Rashi himself writes elsewhere that the actual slavery in Mitzrayim did not begin until after the last of the Shevatim passed away. If so, how can Rashi say that immediately after Yaakov's passing their hearts became closed because of the slavery?

The Be'er HaParashah, citing the Ma'agalei Tzedek, explains this beautifully. We know from other pesukim that the Shevatim originally came down to Mitzrayim only because of the famine. Once Yaakov passed away, and they went back to Eretz Yisrael to bury him in the Me'arat HaMachpelah, the famine was already long over. Logically, they should have stayed in Eretz Yisrael. Yaakov himself had been commanded to go down to Mitzrayim, but his children had not been given such a command. So why did they return to Mitzrayim?

The answer must be that Hashem closed their eyes and hearts from even considering the possibility of staying in Eretz Yisrael. Hashem wanted the decree of slavery to unfold, and therefore He guided them back to Mitzrayim in a way that felt natural and unquestioned. It didn't have to make sense to them, because it was Hashem leading them where they needed to be. This, explains the Ma'agalei Tzedek, is what Rashi means when he says that their eyes and hearts became closed. Not that they were already enslaved, but that Hashem closed off certain lines of thought so that the process He willed could move forward.

This is a lesson that repeats itself constantly in our lives. Many times, years later, a person looks back and asks himself: Why did I choose that path? From where I stand now, I never would have made that decision. The answer is often that Hashem wanted him led in that direction. Hashem guides us not only through clear signs, but through closed doors, missed opportunities, delays, and distractions. What looks like nature is pure hashgacha.

Rabbi Elimelech Biderman shared a remarkable story that illustrates this idea in a very tangible way. In Brooklyn, there is a man named Rabbi Yosef who learns regularly with another Jew who, until about a year ago, was very far from Judaism. They learn together by phone several times a week, and slowly, with siyata d'Shmaya, this man has been growing in his observance.

A few weeks ago, on Erev Chanukah, Rabbi Yosef discovered that his learning partner had put on tefillin only once in his entire life. Rabbi Yosef spoke to him about the importance of the mitzvah and encouraged him to start wearing tefillin daily. The man replied that he didn't own his own tefillin. He only had an inherited pair—small tefillin of Rashi and Rabbeinu Tam, as was his family custom to wear both together. But the straps had faded from black to white. Rabbi Yosef immediately understood that the tefillin were almost certainly pasul. At the same time, he knew that this man was not yet ready to hear that he needed to spend a large sum of money on new tefillin. So Rabbi Yosef decided, quietly, that he would try to raise the money himself and buy him proper tefillin according to his custom.

The very next day, Rabbi Yosef woke up early, as usual, and learned with a different chavruta by phone at six in the morning. After that, however, a series of unusual delays began. One thing after another went wrong, and he missed his regular minyan. He went to a different shul on the same block, but again encountered obstacles and could not pray with that minyan either. Finally, he walked to another shul a block away, where the minyan was much later than the time he normally prays.

As soon as he entered the shul, his eyes were drawn to a small tefillin bag. Attached to it was a sign that read: "Anyone who needs this may take it." He opened the bag and could hardly believe what he saw. Inside were two small pairs of tefillin—Rashi and Rabbeinu Tam. He sent them to a sofer to be checked, and they were found to be completely kosher.

At that moment, everything became clear. All the delays, all the missed minyanim, all the frustrations of that morning were not accidents. They were Hashem closing one door after another in order to lead Rabbi Yosef precisely to the place where those tefillin were waiting. Finding tefillin left for the taking is rare enough. Finding two small, kosher pairs of Rashi and Rabbeinu Tam was nothing short of astonishing. It was as if Hashem had prepared them in advance, custom-made for this man, and simply needed Rabbi Yosef to arrive at the right place at the right time.

This is the message of the parashah. Hashem is constantly leading us—sometimes by opening our eyes, and sometimes by closing them. Our job is not always to understand in the moment, but to trust that every delay, every detour, and every missed plan is part of a precise Divine guidance. Shabbat Shalom.

Weird Darkness: Stories of the Paranormal, Supernatural, Legends, Lore, Mysterious, Macabre, Unsolved
The Perfect Christmas Tree Has a Curse Attached

Weird Darkness: Stories of the Paranormal, Supernatural, Legends, Lore, Mysterious, Macabre, Unsolved

Play Episode Listen Later Dec 22, 2025 29:25 Transcription Available


When Mrs. Hostutler finally found the perfect Christmas tree, she never stopped to wonder why it was growing out of a grave.

Episode 9 of 12 in the #12NightmaresOfXmas series!

In this episode: "The Perfect Christmas Tree", "A Rose For Her Hair", "Eternal Love", "Guides In The Snow"

SOURCES AND ESSENTIAL WEB LINKS…
All stories in this episode are from the book, "The Spirits of Christmas: The Dark Side of the Holidays" by Sylvia Shults: https://amzn.to/3uT2vMA
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Weird Darkness theme by Alibi Music Library. Background music provided by Alibi Music Library, EpidemicSound and/or StoryBlocks with paid license. Music from Shadows Symphony (https://tinyurl.com/yyrv987t), Midnight Syndicate (http://amzn.to/2BYCoXZ), Kevin MacLeod (https://tinyurl.com/y2v7fgbu), Tony Longworth (https://tinyurl.com/y2nhnbt7), and Nicolas Gasparini (https://tinyurl.com/lnqpfs8) is used with permission of the artists.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
(Over time links seen above may become invalid, disappear, or have different content. I always make sure to give authors credit for the material I use whenever possible. If I somehow overlooked doing so for a story, or if a credit is incorrect, please let me know and I will rectify it in these show notes immediately. Some links included above may benefit me financially through qualifying purchases.)
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"I have come into the world as a light, so that no one who believes in me should stay in darkness." — John 12:46
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
WeirdDarkness® is a registered trademark. Copyright ©2023, Weird Darkness.
https://weirddarkness.com/PerfectChristmasTree
#WeirdDarkness, #ChristmasGhostStory, #ScaryChristmas, #GhostStory, #HolidayHorror, #CreepyChristmas, #Paranormal, #TrueScaryStories, #HauntedHolidays, #DarkChristmas

Outlook
I broke the most important rule: don't get attached

Outlook

Play Episode Listen Later Dec 22, 2025 41:06


Swedish youth worker Nicolas Lunabba had one strict rule: never get attached to the kids you help. Then 13-year-old Elijah moved in — and turned his mentor's flat into a home.

In Malmö, Sweden, where poverty and violence shaped young lives, detachment was Nicolas' survival strategy. Then he met Elijah, an eight-year-old with a mohawk, a basketball under his arm, and a fearless, sometimes dangerous streak. They bonded over basketball, and five years later Elijah arrived at Nicolas' flat and made a home on his sofa. He borrowed his clothes, asked him to read aloud from a 3,600-page novel, and slowly cracked the emotional armour of a man who had spent years keeping people at arm's length. What began as mentorship became an unconventional and powerful bond that changed both their lives in extraordinary ways. Nicolas has written a memoir, Will You Care If I Die, and a Swedish film of the same name is currently in production.

Presenter: Jo Fidgen
Producer: Tom Harding Assinder

Lives Less Ordinary is a podcast from the BBC World Service that brings you the most incredible true stories from around the world. Each episode a guest shares their most dramatic, moving, personal story. Listen for unbelievable twists, mysteries uncovered, and inspiring journeys, spanning the entire human experience. Step into someone else's life and expect the unexpected.

Got a story to tell? Send an email to liveslessordinary@bbc.co.uk or message us via WhatsApp: 0044 330 678 2784

You can read our privacy notice here: https://www.bbc.co.uk/programmes/articles/5YD3hBqmw26B8WMHt6GkQxG/lives-less-ordinary-privacy-notice