Podcasts about Regression

  • 2,080 PODCASTS
  • 3,464 EPISODES
  • 45m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Mar 2, 2026 LATEST

POPULARITY (2019–2026)

Best podcasts about Regression

Show all podcasts related to regression

Latest podcast episodes about Regression

Personality Hacker Podcast
When Personality Growth Feels Like Regression | Podcast 630

Personality Hacker Podcast

Play Episode Listen Later Mar 2, 2026 58:00


Explore Your Personality: https://PersonalityHacker.com  Personal growth often feels worse before it feels better, especially when awareness forces you to confront the systems, habits, and relationships you helped create. Joel and Antonia unpack how to measure "improvement," why real development is frequently painful and undignified, and how personality type can point you toward the exact areas you have been avoiding. They also explore the phases of growth, from responsibility and tool selection to implementation, including the pushback you can face from your environment and the slow timeline of rewiring old patterns.

Career Blindspot
Revenge of Sleep Training

Career Blindspot

Play Episode Listen Later Mar 2, 2026 8:28


Why aren't our babies sleeping? That's the question. But this episode isn't just about sleep. It's about leadership, ownership, energy, and what happens when a system that once worked starts slipping after a new variable is introduced. Sleep training is project management. Regression is iteration. Exhaustion changes your standards.

We talk about:

  • Why knowing what to do isn't the same as doing it
  • How depletion impacts decision making
  • Ownership vs resentment
  • Redefining "done" when circumstances change
  • Why you shouldn't make system decisions when you're exhausted

Whether you're leading a household, a team, or yourself, the lesson is the same: Step back. Lay it out. Define the next iteration. Then execute — rested. Embrace the blind spot.

Road Warrior Radio with Chris Hinkley
Road Warrior Radio with Chris Hinkley, February 27, 2026 Hour 1

Road Warrior Radio with Chris Hinkley

Play Episode Listen Later Feb 27, 2026 60:01


Tell me if this makes sense… We live in a world today characterized by a fetishized pornographic addiction to rape. If it were not so, Law & Order: SVU wouldn’t have made it past a single season – let alone, into SYNdication for nearly 30 years…! I loathe Adorno and the CULTural Marxists who SYNthesized (read: weaponized) Marx and Freud to the general detriment of mankind, beginning with the ‘West’. But, he raised some legit points, as often the baddies do. It’s their SOLUTIONS we all need be wary of. For nigh on 100 years, we’ve basked in the jaundiced glow of the Frankfurt School, as legions of university students continue having their minds and spirits poisoned in the name of ‘Progress’. See also the ancient Roman Collegium, a concept dating back to (at least) the days of Plato – who, incidentally, literally wrote the book on The Republic. I digress… In Adorno’s “Fetish-character” essay, he states, a fetish is a substitute object of desire.[1] I would submit that in the latent undercurrent of this Nietzschean ‘power-evolving universe’ of today’s America; men and women, by and large, secretly harbor a craven desire for rape. It sounds crazy! Until one considers the popularity of Law & Order: SVU for the last 27 years. America is Kung-Fu LARPing, with each new iteration of the ‘fetish substitute object of desire’ further blurring the lines between fantasy and reality (schizoaffective disorder) as we creep ever closer to the Chaos Magick of bringing these secret desires to life. But, beware; LARPing has consequences.[2] The Epstein Saga has been publicly ongoing for 2+ decades. More than a thousand witnesses have come forward – including dozens who’ve accused Trump (E. Jean Carroll) – and yet, only Epstein and Maxwell have been ‘brought to justice’. 
Speaking of ‘justice’, Thomas Massie probably said it best:[3] Congress created the Department of Justice, Congress funds the Department of Justice, and Congress is responsible for the oversight of the Department of Justice. When will we see justice? I’ll tell you what I’ve not seen. I’ve not seen any arrests from the revelations in the Epstein Files – over 3 million documents describing horrible things, describing unspeakable things, much of it redacted. Over two dozen people have resigned; CEOs, members of government, worldwide. But, I haven’t seen any arrests or investigations here in the United States, from this Department of Justice. Prince Andrew, Duke of York, who has since been stripped of his royalty, his royal titles, due to his affiliation with Jeffrey Epstein, has been arrested. Peter Mandelson, who previously served as UK’s Ambassador to the United States, resigned in disgrace from United Kingdom’s House of Lords and the Labour Party, and he’s been arrested. Former Prime Minister of Norway Thorbjorn Jagland has been charged. But, we don’t see any charges, arrests, or investigations in the United States. What do we see? We see our FBI Director celebrating in the locker room at the Olympics overseas. It’s fine to be proud of this country. But, we should be proud of this country because we have a system of justice that works. And yet we do not. … We need justice. We want the Department of Justice to get to work, and that’s what they need to do – now. The Trump (45/47) DOJ is unwilling to rat itself out – and so are the other 77+ million co-conspirators… And then there’s the 77 million co-conspirators who voted for Epstein’s best friend Trump as many as three times, knowing he’d been accused of sexual assault by dozens of women, and even after he was found liable for sexually assaulting E. Jean Carroll. For 77 million men and women it was not a dealbreaker! He rapes, but he saves.
He saves more than he rapes … but he probably does rape.[4] Considering the aforementioned, what would be crazy is not acknowledging America’s fetishized pornographic addiction to rape – which is precisely what we’re doing. We are gaslighting ourselves at this point, as we turn a blind eye to our own culpability. After all – on the eve of America’s 250th Anniversary of Independence – wasn’t this always to be a government of, by, and for The People…?

18 For the wrath of God is revealed from heaven against all ungodliness and unrighteousness of men, who hold the truth in unrighteousness; …
21 Because that, when they knew God, they glorified [him] not as God, neither were thankful; but became vain in their imaginations, and their foolish heart was darkened.
22 Professing themselves to be wise, they became fools, …
24 Wherefore God also gave them up to uncleanness through the lusts of their own hearts, to dishonour their own bodies between themselves: …
26 For this cause God gave them up unto vile affections: for even their women did change the natural use into that which is against nature:
27 And likewise also the men, leaving the natural use of the woman, burned in their lust one toward another; men with men working that which is unseemly, and receiving in themselves that recompence of their error which was meet.
28 And even as they did not like to retain God in [their] knowledge, God gave them over to a reprobate mind, to do those things which are not convenient;
29 Being filled with all unrighteousness, fornication, wickedness, covetousness, maliciousness; full of envy, murder, debate, deceit, malignity; whisperers,
30 Backbiters, haters of God, despiteful, proud, boasters, inventors of evil things, disobedient to parents,
31 Without understanding, covenantbreakers, without natural affection, implacable, unmerciful:
32 Who knowing the judgment of God, that they which commit such things are worthy of death, not only do the same, but have pleasure in them that do them.
— Romans 1:18, 21–22, 24, 26–32 KJV

4 Rejoice in the Lord alway: [and] again I say, Rejoice.
5 Let your moderation be known unto all men. The Lord [is] at hand.
6 Be careful for nothing; but in every thing by prayer and supplication with thanksgiving let your requests be made known unto God.
7 And the peace of God, which passeth all understanding, shall keep your hearts and minds through Christ Jesus.
8 Finally, brethren, whatsoever things are true, whatsoever things [are] honest, whatsoever things [are] just, whatsoever things [are] pure, whatsoever things [are] lovely, whatsoever things [are] of good report; if [there be] any virtue, and if [there be] any praise, think on these things.

— Philippians 4:4–8 KJV

#Links

Clips

[1:58] Etymology (the origins of words) was taken out of schools in the early 1900s for a reason. (See also entry below)
[5:39] Demons in the Headlines EXPOSED: The War for Power and Souls in D.C. | Strange Encounters | Ep 29 – YouTube (See also Blaze Media article below)
[3:15] Rep. Massie Asks, “When Will We See Justice” Following Latest Epstein Files Revelations (See also C-SPAN Congressional Chronicle entry below[3:1])

Previous RWR broadcasts referenced

2026-02-25
2026-02-26

Proof of America’s fetishized pornographic addiction to rape

Amanda Seyfried Wore A “Prosthetic [redacted]” For ‘Testament Of Ann Lee’

Amanda Seyfried will go to extreme lengths for a film role — especially when it comes to feeling comfortable during a nude scene. The actor wore what she described as a “prosthetic [redacted]” in her recent movie The Testament of Ann Lee, as she revealed in a Feb. 25 interview with BBC’s The Scott Mills Breakfast Show. “This movie, it needed to be graphic, so, like, I had a prosthetic [redacted],” she said in a clip posted to Instagram, which understandably perplexed Mills himself. When pressed for more details, she surprisingly had a rave review about the experience. “It was cool.
It was exciting.” Seyfried plays the real-life Ann Lee, a Christian woman in 18th-century Great Britain who viewed herself as a representative of God and eventually founded a religious sect called Shakers, with the film capturing her group’s move across the pond to New York during the Colonial era. Son of megachurch pastor sentenced after horrific materials found at home ‘among worst investigators have seen’ An Indiana megachurch once known for preaching purity and sexual morality has found itself at the center of a scandal that has shaken a congregation, rattled political allies, and ended with a six-year prison sentence. Jonathan Peternel, 24, of Pendleton, was sentenced Friday after pleading guilty in January to one Level 4 felony count of child exploitation and three felony counts of possession of child sexual abuse material. The case drew intense public scrutiny not only because of the disturbing evidence uncovered by investigators, but because his father, Nathan Peternel, remains listed as lead pastor at Life Church and is a longtime mentor and close associate of Indiana Lt. Gov. Micah Beckwith. Why Viewers Say You Should Watch ‘Nymphomaniac’ Alone Due to Its Graphic Scenes Both volumes of Lars von Trier’s Nymphomaniac are streaming on Netflix in the U.S., and its return to an easy, familiar platform has revived a warning that has followed the film since 2013: ‘Watch this one by yourself.‘ … So why does this movie come with a warning like that? The movie’s name actually answers that on its own. The term nymphomania is used to classify someone who has an uncontrollable compulsion toward sex, and that is exactly what the film follows across 2 volumes and 8 chapters. It opens with a woman named Joe, found beaten in an alley. A man named Seligman brings her home, and she begins telling him the story of her life from her earliest sexual memories through decades of escalating need. 
Von Trier was telling the story of a woman whose entire life is shaped by a compulsion she cannot control. … The discomfort the audience feels isn’t incidental. It’s the mechanism. Von Trier built the film so that watching it puts you closer to Joe’s experience than any non-explicit version ever could. The surface reading is addiction… What Joe is actually chasing is not sex but connection. Every encounter she describes to Seligman moves her further from other people rather than closer to them. Sex becomes the thing she reaches for because the thing she actually needs keeps slipping out of range. That distance between the act and the need behind it is where von Trier plants the real story. The compulsion is real, but the loneliness underneath it is what he keeps circling back to. He called this technique “Digressionism,” a term he coined to describe a storytelling style that deliberately wanders away from its own plot. He cited Marcel Proust as an influence. Nymphomaniac is the final film in what von Trier and critics call the Depression Trilogy. Following Antichrist in 2009 and Melancholia in 2011. After years infiltrating child exploitation rings, expert reveals an even DARKER American underworld | Blaze Media Demons in the Headlines EXPOSED: The War for Power and Souls in D.C. | Strange Encounters | Ep 29 – YouTube [31:30–33:26] Back to the politics piece; everybody within politics – even if they disagree with exploitation or whatever – they show partiality. And, I believe it’s, is it second Peter? … It says, ‘where partiality exists, exists every form of deceit and evil’. We can look it up … but I think that’s it. But, where partiality exists, exists all forms of evil. ***[Did he mean this passage?]For where envying and strife [is], there [is] confusion and every evil work. But the wisdom that is from above is first pure, then peaceable, gentle, [and] easy to be intreated, full of mercy and good fruits, without partiality, and without hypocrisy. 
– James 3:16–17 KJV*** And, what is happening in our political world that I’ve that I’ve seen now is; you have career politicians – even if they claim to be Christians – they sell access. And, it might be access to conservative organizations. But, they sell access – and they’re partial to donors. … they’re unbelievably partial. And, they’re partial to their ‘club’, as opposed to the people they’re elected to represent. And, you have a bureaucracy that’s in place, and you have these elitists that are in place, that think that they can buy – because they have been able to buy your position – buy you, buy access to you, or buy access to somebody else, and ‘own’ – in this case, a US Senator, what I’m running for. But, it’s across the board for everything; Congressmen, even the President … Everything’s for sale. And, it’s ‘access’ that they’re selling, right? And, that’s the thing that stood out to me the most; partiality. More proof / Trump-Epstein Saga DOJ’s Epstein Files Screwups Get Worse With Unredacted Nudes and Images of Kids The Justice Department is under fire after newly released Jeffrey Epstein case materials reportedly included unredacted nude images and photos involving minors. Analysis by CNN uncovered nearly 100 explicit pictures of two naked young women on a beach, the news outlet reported. The materials also included photos showing a young girl kissing Epstein on the cheek. At least one unredacted image depicted Epstein alongside a nude female, and additional selfie-style nude photos of at least two other unidentified females were also published, with their ages unclear, according to CNN. Under the Epstein Files Transparency Act, which Congress passed and President Trump signed in late November, the DOJ is obligated to omit sexually explicit imagery and anything that might identify victims. The images have now been redacted. 
DOJ Gives Shameless Reason for Hiding Photo of Howard Lutnick and Jeffrey Epstein Donald Trump’s White House Chief of Staff Susie Wiles is ‘Shocked’ the FBI Dared to Come for Her ‘Uncle Jeff’ shifts focus on Erika Kirk grooming allegations post-Epstein file release – We Got This Covered Most Americans in new survey dispute Donald Trump’s economic boom claim CBS’s new hire appeared 1,700 times in Epstein’s files, and John Oliver just exposed his disturbing emails – We Got This Covered Epstein Had Close Ties to Prosecutor Behind Key Provision of Plea Deal | The New Republic Turns out ICE is just a bunch of scared widdle guys Fear as senator discovers staggering true amount Trump spent on arming ICE – Raw Story Congressional Chronicle – Members of Congress, Hearings and More | C-SPAN.org[3:2] [standalone clip] Rep. Massie Asks, "When Will We See Justice" Following Latest Epstein Files Revelations | Video | C-SPAN.org The Purpose Of the System Is What It Does (POSIWID) Millions at Risk as Android Mental Health Apps Expose Sensitive Data US defense secrets sold to Russians for millions in crypto – Newsweek Tucker Carlson pushes DNA tests for Jews, ‘Khazar’ theory | The Jerusalem Post The largely discredited theory states that Ashkenazi Jews are genetically descended from a Turkic minority that converted to Judaism in the Middle Ages rather than from the 12 tribes of Israel. During Tucker Carlson’s interview last week with Mike Huckabee, the US ambassador to Israel, both men made considerable waves with their takes on history and theology. Anthropic says it will not accede to Pentagon demands as deadline looms | AP News Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. 
But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.” From the Wayback. Why – and why now – is Daily Mail breaking these stories out of the dust bin…? Secret mind-control techniques using TVs revealed in disturbing patent | Daily Mail Online Declassified CIA memo reveals plan to turn citizens into unwitting assassins | Daily Mail Online On the lighter / brighter side… Why age is an advantage for starting a business – Fast Company Sardonic levity, as Rome burns… Images That Might Indicate Society is in Decline | eBaum’s World Caller Dialogue David – WI Feminism dating back to early 1800s (CH: Owenism – Wikipedia) Valerie Solanas, SCUM Manifesto – Wikipedia Friedrich Nietzsche, Beyond Good and Evil (1886)[5] Insanity in individuals is something rare–but in groups, parties, nations, and epochs it is the rule. Bitchute: Etymology (the origins of words) was taken out of schools in the early 1900’s for a reason. Also on YouTube: Etymology ~ The Origins Of Words Was Taken Out Of Schools In The Early 1900s For A Reason – YouTube James – Vancouver The Scribner-Bantam English dictionary : Williams, Edwin B. (Edwin Bucher), 1891-1975 : Free Download, Borrow, and Streaming : Internet Archive #Footnotes Clowney, David W. “On the Fetish-Character in Music and the Regression of Listening” Reading Notes for the 1938 Essay by Theodor Adorno. 3 Nov. 2005, p. 6, users.rowan.edu/~clowney/aesthetics/ReadingGuides/Adorno.ppt. Accessed 26 Feb. 2026. More (e.g., “course guides” at Clowney’s aesthetics page: users.rowan.edu/~clowney/aesthetics/. ︎ Berenson, Alex. “On the Dangers of Cosplay.” Substack.com, Unreported Truths, 11 Jan. 2026, alexberenson.substack.com/p/on-the-dangers-of-cosplay. Accessed 26 Feb. 2026. ︎ C-SPAN. 
“Congressional Chronicle – Members of Congress, Hearings and More.” C-SPAN.org, C-SPAN, 24 Feb. 2026, www.c-span.org/congress/?chamber=house&date=2026-02-24. Accessed 26 Feb. 2026. Click on “Speakers” tab, select Thomas Massie in “Speakers” dropdown menu, and see timestamp (10:45:03 AM) and transcript of Massie’s remarks. [Massie:] Congress created the Department of Justice, Congress funds the Department of Justice, and Congress is responsible for the oversight of the Department of Justice. When will we see justice? I’ll tell you what I’ve not seen. I’ve not seen any arrests from the revelations in the Epstein Files – over 3 million documents describing horrible things, describing unspeakable things – much of it redacted. Over two dozen people have resigned; CEOs, members of government, worldwide. But, I haven’t seen any arrests or investigations here in the United States, from this Department of Justice. Prince Andrew, Duke of York, who has since been stripped of his royalty, his royal titles, due to his affiliation with Jeffrey Epstein, has been arrested. Peter Mandelson, who previously served as UK’s Ambassador to the United States, resigned in disgrace from United Kingdom’s House of Lords and the Labour Party, and he’s been arrested. Former Prime Minister of Norway, Thorbjorn Jagland has been charged. But, we don’t see any charges, arrests, or investigations in the United States. What do we see? We see our FBI Director celebrating in the locker room at the Olympics overseas. It’s fine to be proud of this country. But, we should be proud of this country because we have a system of justice that works. And yet we do not. Who are the men that should be investigated? I’ll name them right here. Leon Black; you don’t even have to see past the redactions to see that this man needs to be investigated. Jess Staley; accused of terrible things, it’s right there in the files. Why is he not being investigated?
And, Leslie Wexner; why did the FBI list him as a co-conspirator in their own documents in a child sex trafficking case, and then tell him, according to him, that they had no questions for him? Why is that? Well, the Epstein Files Transparency Act requires the DOJ and the FBI to disclose to us their internal memos and emails about how they made those decisions, whether to prosecute or not prosecute. Yet, they have not delivered those memos. And, we still don’t have the memos and documents and emails from 2008, to explain why Jeffrey Epstein was given such a light sentence in what would have been an open and shut case of child sex trafficking, which allowed him to go back and recommit these terrible crimes, create hundreds of more victims, and ensnare so many other people in his conspiracy. Where are those documents that describe those decisions? We need justice. We want the Department of Justice to get to work, and that’s what they need to do – now! Jones, Marcie. “Gee, Look at All These Co-Conspirators in the Epstein Files That Pam Bondi and Kash Patel Say Never Existed.” Wonkette.com, Wonkette, 25 Feb. 2026, www.wonkette.com/p/gee-look-at-all-these-co-conspirators. Accessed 26 Feb. 2026. ︎ Nietzsche, Friedrich. Beyond Good and Evil. 1886. Gutenberg.org, Chapter IV. Apophthegms And Interludes, ln. 156, 4 Feb. 2013, gutenberg.org/files/4363/4363-h/4363-h.htm. Accessed 28 Feb. 2026. from The Complete Works of Friedrich Nietzsche (1909-1913). ︎

How I Learned to Love Shrimp
Seth Green on why reducing meat consumption is hard and what actually works

How I Learned to Love Shrimp

Play Episode Listen Later Feb 26, 2026 71:57 Transcription Available


This episode, I spoke with Seth Ariel Green, a research scientist at the Humane and Sustainable Food Lab at Stanford University. He recently published a meta-analysis called “Meaningfully reducing consumption of meat and animal products is an unsolved problem” (EA Forum summary here) where he reviewed over 30 papers and hundreds of interventions on the topic. Seth also writes about the science of meat reduction on his Substack, called Regression to the Meat, which I highly recommend checking out for some accessible and fun-to-read writing about meat reduction.

We talk about why Seth is more sceptical than most about plant-based defaults, what actually works when it comes to changing people's food choices, why some research in this space is misleading, and new interventions to shape diets and food choices that he is excited about.

Chapters:

  • (00:00:00) Cold intro
  • (00:00:53) Introduction to Seth and his work
  • (00:05:38) What are defaults and why is Seth sceptical
  • (00:19:55) The best paper on defaults - what does it mean for advocates?
  • (00:28:50) What does the research on meat reduction say?
  • (00:34:25) Is 5 percentage points a small or big change in meat consumption?
  • (00:43:20) What actually works in reducing meat consumption?
  • (00:50:18) Potential interventions that Seth is excited about

Resources:

  • Seth's blog
  • Wayne Hsiung's New Yorker interview
  • Ginn, J., & Sparkman, G. (2024). Can you default to vegan? Plant-based defaults to change dining practices on college campuses.
  • Finkelstein et al. (2012). The Oregon health insurance experiment: evidence from the first year.
  • Jalil, A. J., Tasoff, J., & Bustamante, A. V. (2023). Low-cost climate-change informational intervention reduces meat consumption among students for 3 years.
  • Hope, J. E., Green, S. A., Peacock, J. R., & Mathur, M. (2025). Taking a bite out of meat, or just giving fresh veggies the boot? Plant-based meats did not reduce meat purchasing in a randomized controlled menu intervention.
  • Edwards, D. M., Ondish, P., & Neff, R. (2025). Increasing meatless options to decrease meat consumption.
  • Kramer, L. A., & Landry, P. (2025). How the Sausage Is Made: Testing the Effectiveness of an Informative Video in Promoting Sustainable Food Consumption.
  • Kenny Torella's Is it even possible to convince people to stop eating meat?
  • Warren Belasco: Food: The Key Concepts
  • Join our lab's seminar email list!

With thanks to Tom Felbar (Ambedo Media) for amazing video and audio editing! If you enjoy the show, please leave a rating and review us - it means a lot to us!
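One chapter asks whether a 5-percentage-point drop in meat consumption is small or big; the answer partly depends on whether you read it as an absolute (percentage-point) or relative (percent) change. A quick Python illustration with made-up baseline numbers, not figures from the episode:

```python
# Hypothetical baseline: 60% of meals contain meat; an intervention cuts that by 5 points.
baseline = 0.60
after = baseline - 0.05

absolute_change_pp = (baseline - after) * 100              # change in percentage points
relative_change_pct = (baseline - after) / baseline * 100  # change as a percent of baseline

# The same 5-point drop is only about an 8.3% relative reduction in meat meals.
print(f"{absolute_change_pp:.1f} pp absolute, {relative_change_pct:.1f}% relative")
```

The distinction matters when comparing studies: a "5-point effect" sounds modest against a 60% baseline but would be a halving of a 10% baseline.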

Paranormal UK Radio Network
Trans-Dimensional Realities - Episode 6 - Past Life Regression with Christopher Sansone

Paranormal UK Radio Network

Play Episode Listen Later Feb 26, 2026 65:17 Transcription Available


Milyssa talks with Past Life Regressionist and author Christopher Sansone about his experiences regressing patients with past life memories and how they can influence a person in their current lifetime. Become a supporter of this podcast: https://www.spreaker.com/podcast/paranormal-uk-radio-network--4541473/support.

A Celtic State of Mind
Why Celtic's hierarchy should never accept European regression // ACSOM // A Celtic State of Mind

A Celtic State of Mind

Play Episode Listen Later Feb 26, 2026 82:42


Darek Weber Scary Stories
9 True Veil-Lifting Past Life Stories

Darek Weber Scary Stories

Play Episode Listen Later Feb 23, 2026 36:48


9 More True Past Life Stories.

▾ ABOUT THIS CHANNEL ▾
I collect the internet's strangest real-life glitches in the matrix, "simulation errors," time slips, and impossible coincidences. New videos twice a week.

▾ SUBMIT YOUR STORY ▾
Have a firsthand glitch or unexplainable mystery? Send it to ► DarekWeberSubmissions@gmail.com (Please include how you want me to credit you)

▾ SUPPORT THE CHANNEL ▾
Patreon ► https://patreon.com/DarekWeberScaryStories?utm_medium=unknown&utm_source=join_link&utm_campaign=creatorshare_creator&utm_content=copyLink
Join channel memberships ► https://www.youtube.com/@DarekWeber/membership
Merch ► https://darek-weber-shop.fourthwall.com/

The 'X' Zone Radio Show
Rob McConnell Interviews - DR BRUCE GOLDBERG DDS - Dentist or Snake Oil Salesman - It's All BS!

The 'X' Zone Radio Show

Play Episode Listen Later Feb 21, 2026 49:28 Transcription Available


Bruce Goldberg is a dentist by training who later became known as a hypnotherapist and author writing about past-life regression, time travel consciousness, and metaphysical healing concepts. Discussions framed around titles such as “Dentist or Snake Oil Salesman – It's All BS!” reflect the strong controversy surrounding his claims and methods. Critics question the scientific validity of his metaphysical assertions and therapeutic approaches, while supporters view his work as exploratory consciousness research and spiritual healing practice. His career illustrates the broader public debate over regression therapy, alternative metaphysics, and the standards of evidence applied to extraordinary claims.

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-x-zone-radio-tv-show--1078348/support.

Please note that all XZBN radio and/or television shows are Copyright © REL-MAR McConnell Media Company, Niagara, Ontario, Canada – www.rel-mar.com. For more episodes of this show and all shows produced, broadcast, and syndicated from REL-MAR McConnell Media Company and The 'X' Zone Broadcast Network and the 'X' Zone TV Channel, visit www.xzbn.net. For programming, distribution, and syndication inquiries, email programming@xzbn.net.

We are proud to announce that we launched TWATNews.com in August 2025. TWATNews.com is an independent online news platform dedicated to uncovering the truth about Donald Trump and his ongoing influence in politics, business, and society. Unlike mainstream outlets that often sanitize, soften, or ignore stories that challenge Trump and his allies, TWATNews digs deeper to deliver hard-hitting articles, investigative features, and sharp commentary that mainstream media won't touch. These are stories and articles that you will not read anywhere else. Our mission is simple: to expose corruption, lies, and authoritarian tendencies while giving voice to the perspectives and evidence that are often marginalized or buried by corporate-controlled media.

Ninkas Detox
#209: WHY GOD HASN'T HEALED YOUR CHILD FROM AUTISM YET.

Ninkas Detox

Play Episode Listen Later Feb 19, 2026 17:04


99% of Christian autism moms miss these 3 hidden reasons why your child's symptoms like anger, meltdowns, rigidity, sleeplessness, and nonverbal autism PERSIST. TODAY ON THE PODCAST I'LL REVEAL:

SaaS Talk™ with the Metrics Brothers - Strategies, Insights, & Metrics for B2B SaaS Executive Leaders

In this extended episode, the Metrics Brothers tackle the "elephant in the room" - the SaaSpocalypse. With nearly $1 trillion in market value wiped out recently, Ray and Dave go beyond the stock market headlines to analyze the structural shifts hitting the industry. The duo breaks down the three primary drivers of the current market "carnage", including the AI Fear of Being Obsolete (FOBO), Regression to the Mean for SaaS stocks, and Changing Valuation Methodologies, before diving into the newly released Aviner report, The Future of SaaS: A Fork in the Road. Using Aviner's "Red Pill vs. Blue Pill" metaphor, they debate whether SaaS companies must fundamentally pivot to "agentic" systems or accept maturity and financialize by focusing on profitability.

Covered in This Episode:

  • The SaaSpocalypse Explained: Why the stock market is currently a "rugby scrum of information" and why stock price is a measure of future expectations rather than current health
  • ServiceNow as a Bellwether: An analysis of how a "Rule of 56" company can beat expectations and still see a 30% stock drop in a single month
  • FOBO (Fear of Being Obsolete): How the "revenge of build vs. buy" and the collapsing cost of coding are demoting traditional SaaS apps to mere systems of record
  • The Aviner Report Breakdown:
    Part 1: The hard data on slowing revenue growth (cut from 40% to 20%) and the "aberration" of 2019–2021 multiples
    Part 2: The binary choice between embracing AI "Systems of Context" or financializing for net income
  • The "Architect Strategy": Ray's argument for a third path where SaaS companies coexist with AI by providing the governance and orchestration layer
  • Buyer Sentiment vs. Market Narrative: Why 63% of software buyers believe existing vendors will be the beneficiaries of AI, contradicting the current "SaaS is dead" stock market trend

Key Metrics & Concepts Mentioned:

  • Rule of 40 vs. Rule of 60: How the standard for SaaS health is shifting
  • Stock-Based Compensation (SBC): Why excluding this from profitability metrics is no longer passing the "financialization" test
  • The Three-Layer Taxonomy: Systems of Record, Systems of Engagement, and Systems of Context
  • Multiple Compression: The shift from 15x revenue multiples back to the historical 5x mean

Resources Mentioned:

  • Report: The Future of SaaS: A Fork in the Road by Aviner Growth (Jan 2026)
  • Book: The Reckoning by David Halberstam
  • Book: Profit Pools by Orit Gadiesh and James L. Gilbert

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
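For readers unfamiliar with the heuristics name-checked above: the Rule of 40 says a SaaS company is considered healthy when its revenue growth rate plus its profit margin totals at least 40 percent; the Rule of 60 raises that bar to 60. A minimal sketch in Python; the function names and the 21%/35% figures are illustrative, not from the episode:

```python
def saas_health_score(revenue_growth_pct: float, profit_margin_pct: float) -> float:
    """Rule-of-N input: growth rate plus profit margin, both in percent."""
    return revenue_growth_pct + profit_margin_pct

def passes_rule(score: float, threshold: float = 40.0) -> bool:
    """Rule of 40 by default; pass threshold=60.0 for the stricter Rule of 60."""
    return score >= threshold

# A hypothetical "Rule of 56" company: 21% growth plus a 35% margin.
score = saas_health_score(21.0, 35.0)   # 56.0
print(passes_rule(score))               # True  (clears the Rule of 40)
print(passes_rule(score, 60.0))         # False (misses the Rule of 60)
```

This shows why a shift from Rule of 40 to Rule of 60 matters: the same company can clear one bar and miss the other without any change in its fundamentals.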

The Democracy Group
How Does Transformation Lead to Regression? | Politics in Question

Play Episode Listen Later Feb 16, 2026 55:30


Enjoying the show? Subscribe to hear the rest of Politics in Question's episodes!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod
On YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The “Bitter Lesson” and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the “Virtual Cell”
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the “bitter lesson” of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction: quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes.
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. 
And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.Brandon [00:09:45]: So I want to ask, why does the intermediate states matter? But first, I kind of want to understand, why do we care? What proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through that interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how disease work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction. And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like seeing the difference between... Having kind of a list of parts that you would put it in a car and seeing kind of the car in its final form, you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein falls or, you know, how the car is made to some extent is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding. 
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line in the, I think it's in the AlphaFold2 manuscript, where they sort of discuss also like why we're even hopeful that we can target the problem in the first place. And then there's this notion that like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that like, yeah, like we should, we might be... able to predict that this very like constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be actually studied, and part of the reason why people thought it was impossible, it used to be studied as kind of like a classical example of, like, an NP problem. Uh, like there are so many different, you know, types of, you know, shapes that, you know, this amino acid could take. And so, this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising also from that perspective, kind of seeing machine learning solve it. Clearly, there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, to understand, but that these models have learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, like think of it this way that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other. I see.Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right.
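The correlated-mutation idea RJ describes can be sketched as a toy computation over a multiple sequence alignment; the MSA below is invented, and real pipelines use far more sophisticated statistics (e.g. direct coupling analysis, or learned models) than raw mutual information:

```python
# Toy sketch: MSA columns that mutate together score high mutual information,
# and high-scoring column pairs are predicted to be close in 3D.
from collections import Counter
from itertools import combinations
from math import log2

msa = [  # hypothetical aligned sequences; columns 0 and 3 always co-vary
    "ACDA",
    "ACDA",
    "GCDG",
    "GCDG",
    "ACDA",
    "GCDG",
]

def mutual_information(col_i: int, col_j: int) -> float:
    """Mutual information (in bits) between two MSA columns."""
    n = len(msa)
    pi = Counter(s[col_i] for s in msa)   # marginal of column i
    pj = Counter(s[col_j] for s in msa)   # marginal of column j
    pij = Counter((s[col_i], s[col_j]) for s in msa)  # joint distribution
    return sum(
        (c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
        for (a, b), c in pij.items()
    )

# Rank column pairs by co-variation; the top pair is the predicted contact.
pairs = sorted(
    combinations(range(4), 2),
    key=lambda p: mutual_information(*p),
    reverse=True,
)
print(pairs[0])  # (0, 3): the columns that always mutate together
```

In practice, phylogenetic bias and indirect correlations make this raw signal noisy, which is part of why learned models ended up decoding these hints so much better.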
It's almost like, you know, you have this big, like three-dimensional valley, you know, where you're sort of trying to find like these like low energy states and there's so much to search through. That's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of like, how much physics are these models learning, you know, versus like, just pure like statistics. And like, I think one of the things, at least I believe, is that once you're in that sort of approximate area of the solution space, then the models have like some understanding, you know, of how to get you to like, you know, the lower energy, uh, low energy state. And so maybe you have some, some light understanding of physics, but maybe not quite enough, you know, to know how to like navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley and then it finds the, the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation of how AlphaFold works that I think is quite insightful, though of course it doesn't cover kind of the entirety of what AlphaFold does, is one I'm going to borrow from, uh, Sergio Chinico at MIT. So the interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment? Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is running some kind of, you know, Dijkstra-like algorithm where it's sort of decoding, okay, these have to be close. Okay. Then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.Brandon [00:16:42]: Interesting. So there's kind of two different things going on in the kind of coarse grain and then the fine grain optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe we have a good time to move on to that. So yeah, AlphaFold2 came out and it was like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out and maybe for some more history, like what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Boltz. But anyway. Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field and with many others, you know, the clear problem that, you know, was, you know, obvious after that was, okay, now we can do individual chains. Can we do interactions: interactions of different proteins, proteins with small molecules, proteins with other molecules. And so, why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of multiple chains. And then these multiple chains interact with other molecules to give the function to those. And on the other hand, you know, when we try to intervene on these interactions, think about like a disease, think about like a, a biosensor or many other ways, we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem, after AlphaFold2, became clear, kind of one of the biggest problems in the field to solve. Many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaFold3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do, while, you know, some of the rest of the field really tried to model different interactions separately, you know, how protein interacts with small molecules, how protein interacts with other proteins, how RNA or DNA have their structure, is that they put everything together and, you know, trained very large models with a lot of advances, including changing some of the key architectural choices, and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein-small molecule, which is critical to developing kind of new drugs, protein-protein, understanding, you know, interactions of, you know, proteins with RNA and DNA and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours, in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem.
So where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these structures can actually take multiple conformations. And so, you know, you can now model that, you know, through kind of modeling the entire distribution. But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about, you know, I'm undecided between different answers, what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, like the very specialized equivariant architecture that it was in AlphaFold2.Brandon [00:21:41]: So this is a bitter lesson, a little bit.
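Gabriel's averaging point can be illustrated with a tiny numerical sketch (not the AlphaFold or Boltz code; the one-dimensional "conformations" are invented):

```python
# Toy illustration: when the ground truth is ambiguous (two valid
# conformations), a regression model trained with squared error converges to
# their average, a structure that matches neither, while a generative model
# can sample both modes.
import random

random.seed(0)

# Two equally likely "conformations" of a 1-D toy coordinate.
conformation_a, conformation_b = -1.0, 1.0
samples = [random.choice([conformation_a, conformation_b]) for _ in range(10_000)]

# Regression answer: the mean-squared-error minimizer is the mean, close to
# 0.0, which is not a valid conformation at all.
regression_answer = sum(samples) / len(samples)
print(abs(regression_answer) < 0.1)  # True: the regression collapses to ~0

# Generative answer: sampling reproduces both modes instead of averaging them.
draws = [random.choice([conformation_a, conformation_b]) for _ in range(10)]
print(set(draws) <= {conformation_a, conformation_b})  # True
```

The "separate models to analyze those different answers" step corresponds to the confidence heads used to rank the sampled structures.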
And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of the consensus is that, you know, the performance... that we get from the specialized architecture is vastly superior to what we get through a single transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive seeing some of the other kind of fields and applications, is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.RJ [00:29:14]: in a place, I think, where we had, you know, some experience working in, you know, with the data and working with this type of models. And I think that put us already in like a good place to, you know, to produce it quickly. And, you know, and I would even say, like, I think we could have done it quicker. The problem was like, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so like, while the model was training, we were like, finding bugs left and right. A lot of them that I wrote. And like, I remember like, I was like, sort of like, you know, doing like, surgery in the middle, like stopping the run, making the fix, like relaunching. And yeah, we never actually went back to the start. We just like kept training it with like the bug fixes along the way, which would be impossible to reproduce now. Yeah, yeah, no, that model is like, has gone through such a curriculum that, you know, learned some weird stuff.
But yeah, somehow by miracle, it worked out.Gabriel [00:30:13]: The other funny thing is that the way that we were training, most of that model was through a cluster from the Department of Energy. But that's sort of like a shared cluster that many groups use. And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so actually, kind of towards the end, I talked with Evan, the CEO of Genesis, and basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we, we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.Brandon [00:30:57]: Yeah, yeah.Brandon [00:31:02]: And then, and then there's some progression from there.Gabriel [00:31:06]: Yeah, so I would say kind of that Boltz-1, but also kind of these other kind of set of models that came around the same time, were a big leap from, you know, kind of the previous kind of open source models, and, you know, kind of really kind of approaching the level of AlphaFold3. But I would still say that, you know, even to this day, there are, you know, some... specific instances where AlphaFold3 works better. I think one common example is antibody-antigen prediction, where, you know, AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models. They are, you know, you run them, you obtain different results. So it's, it's not always the case that one model is better than the other, but kind of in aggregate, we still, especially at the time.Brandon [00:32:00]: So AlphaFold3 is, you know, still having a bit of an edge.
We should talk about this more when we talk about BoltzGen, but like, how do you know one is, one model is better than the other? Like you, so you, I make a prediction, you make a prediction, like, how do you know?Gabriel [00:32:11]: Yeah, so, you know, the, the great thing about kind of structure prediction, and, you know, once we're going to go into the design space of designing new small molecules, new proteins, this becomes a lot more complex, but a great thing about structure prediction is that, a bit like, you know, CASP was doing, basically the way that you can evaluate them is that, you know, you train... You know, you train a model on structures that were, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then... And then we basically look for recent structures, okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.Brandon [00:33:13]: And then on these new structures, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, when you're, you intentionally trained to the same date or something like that. Exactly. Right. Yeah.Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily kind of compare these models, obviously, that assumes that, you know, the training. You've always been very passionate about validation. I remember like DiffDock, and then there was like DiffDock-L and DockGen. You've thought very carefully about this in the past.
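The time-split protocol Gabriel describes can be sketched as a simple filter; the entries, dates, and similarity scores below are hypothetical:

```python
# Sketch of time-split evaluation: train only on structures deposited before
# a cutoff date, then benchmark on later depositions, ideally keeping only
# those with low similarity to the training set so success measures
# generalization rather than memorization.
from datetime import date

pdb_entries = [  # hypothetical (id, deposition date, similarity to train set)
    ("1abc", date(2019, 5, 1), None),
    ("2def", date(2020, 11, 3), None),
    ("7xyz", date(2022, 6, 9), 0.28),
    ("8foo", date(2023, 1, 20), 0.85),
]

cutoff = date(2021, 9, 30)  # e.g. the training cutoff used for the model

train = [e for e in pdb_entries if e[1] < cutoff]
# Evaluation set: post-cutoff AND structurally dissimilar to the train set.
test = [e for e in pdb_entries if e[1] >= cutoff and e[2] is not None and e[2] < 0.4]

print([e[0] for e in train])  # ['1abc', '2def']
print([e[0] for e in test])   # ['7xyz']
```

Comparing two models fairly then requires both to share the same cutoff date, which is the caveat Brandon raises above.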
Like, actually, I think DockGen is like a really funny story that I think, I don't know if you want to talk about that. It's an interesting like... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people. Really like... But honestly, most of the times, you know, to be honest, that's also maybe the most useful feedback is, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical. And this is also something, you know, across other fields of machine learning. It's always critical, to do progress in machine learning, to set clear benchmarks. And as, you know, you start doing progress on certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates. And so, you know, the example of DockGen was, you know, we published this initial model called DiffDock in my first year of PhD, which was sort of like, you know, one of the early models to try to predict kind of interactions between proteins and small molecules, that we built a year after AlphaFold2 was published. And now, on the one hand, you know, on these benchmarks that we were using at the time, DiffDock was doing really well, kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to kind of many biologists, and one group that we collaborated with was the group of Nick Polizzi at Harvard, we noticed, started noticing that there was this clear pattern where, for proteins that were very different from the ones that it was trained on, the model was, was struggling. And so, you know, that seemed clear that, you know, this is probably kind of where we should, you know, put our focus on.
And so we first developed, you know, with Nick and his group, a new benchmark, and then, you know, went after it and said, okay, what can we change about the current architecture to improve this pattern of generalization? And this is the same thing that, you know, we're still doing today, you know, kind of, where does the model not work, you know, and then, you know, once we have that benchmark, you know, let's try to throw everything we have, any ideas that we have, at the problem.RJ [00:36:15]: And there's a lot of like healthy skepticism in the field, which I think, you know, is, is, is great. And I think, you know, it's very clear that there's a ton of things the models don't really work well on, but I think one thing that's probably, you know, undeniable is just like the pace of, pace of progress, you know, and how, how much better we're getting, you know, every year. And so I think if you, you know, if you assume, you know, any constant, you know, rate of progress moving forward, I think things are going to look pretty cool at some point in the future.Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?RJ [00:36:45]: Like, yeah, yeah, yeah, it's one of those things. Like, you've been doing this. Being in the field, you don't see it coming, you know? And like, I think, yeah, hopefully we'll, you know, we'll, we'll continue to have as much progress as we've had the past few years.Brandon [00:36:55]: So this is maybe an aside, but I'm really curious, you get this great feedback from the, from the community, right? By being open source. My question is partly like, okay, yeah, if you open source, then everyone can copy what you did, but it's also maybe balancing priorities, right? Where, like, all my customers are saying, I want this, there's all these problems with the model. Yeah, yeah. But my customers don't care, right? So like, how do you, how do you think about that?
Yeah.Gabriel [00:37:26]: So I would say a couple of things. One is, you know, part of our goal with Boltz, and, you know, this is also kind of established as kind of the mission of the public benefit company that we started, is to democratize the access to these tools. But one of the reasons why we realized that Boltz needed to be a company, it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get, you know, chemists and biologists, you know, across, you know, both academia, biotech and pharma to use your model in their therapeutic programs. And so a lot of what we think about, you know, at Boltz, beyond kind of the, just the models, is thinking about all the layers, the layers that come on top of the models to get, you know, from, you know, those models to something that can really enable scientists in the industry. And so that goes, you know, into building kind of the right kind of workflows that take in kind of, for example, the data and try to answer kind of directly those problems that, you know, the chemists and the biologists are asking, and then also kind of building the infrastructure. And so this is to say that, you know, even with models fully open, you know, we see a ton of potential for, you know, products in the space. And the critical part about a product is that even, you know, for example, with an open source model, you know, running the model is not free, you know, as we were saying, these are pretty expensive models. And especially, and maybe we'll get into this, you know, these days we're seeing kind of pretty dramatic inference time scaling of these models where, you know, the more you run them, the better the results are. But there, you know, you start getting to a point where compute and compute costs become a critical factor.
And so putting a lot of work into building the right infrastructure, building the optimizations and so on, really allows us to provide a much better service on top of the open-source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models out as open source, because the critical role of open-source models is helping the community progress on the research, from which we all benefit. So on the one hand we'll continue to release some of our base models as open source so the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on our models. But then we try to build a product that gives the best experience possible to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. A bit like how, even though I am a machine learning scientist, I don't necessarily take an open-source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience on this front.

Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right? So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, who would reach out just for us to run AlphaFold3 for them, or things like that.
Or Boltz, in our case. Just because it's not that easy to do if you're not a computational person. And part of the goal here is also that we continue to build the interface for computational folks, obviously, but that the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. It grew very quickly. Did that surprise you? What has the evolution of that community been, and how has it fed back into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model there's a big jump. But yeah, it's been great. We have a Slack community with thousands of people on it, and it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to answer everyone's questions and help; it's really difficult for the few people that we were. But it ended up that people would answer each other's questions and help one another. So the Slack has been self-sustaining, and that's been really cool to see.

RJ [00:42:21]: That's the Slack part, but we've also had a nice community on GitHub. I think we aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us.
But yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the code base. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on making it easy to use and accessible?

RJ [00:43:14]: I think so, yeah. We've heard it from a few people over the years now. Some people still think it should be a lot nicer, and they're right. But I think at the time it was maybe a little bit easier to use than other things.

Gabriel [00:43:29]: The other thing that I think led to the community, and to some extent to the trust in what we put out, is the fact that it's not really been just one model. Maybe we'll talk about it, but after Boltz-1 there were another couple of models released or open-sourced soon after, and we continued that open-source journey with Boltz-2, where we were not only improving structure prediction but also starting to do affinity prediction: understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there is almost an expectation, one we take very much to heart, of always having, across the entire suite of different tasks, the best or close to the best model out there, so that our open-source tools can be the go-to models for everybody in the industry.

I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: I mean, we've had many contributions. One of the interesting ones: we had one individual who wrote a complex GPU kernel for part of the architecture. The funny thing is that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do things like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, and this was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in how they predicted the antibodies. In these models you can condition, you can give hints. So he basically gave random hints to the model: okay, you should bind to this residue; you should bind to the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. It's like doing a scan: conditioning the model to predict all of them, then looking at the confidence of the model in each of those cases and taking the top. It's a somewhat crude way of doing inference-time search, but surprisingly, for antibody-antigen prediction, it actually helped quite a bit. So there are some interesting ideas where, as the person developing the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking, okay, how do I do this not by brute force but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. It speaks to the power of scoring, and we're seeing that a lot; I'm sure we'll talk about it more when we talk about BoltzGen. Our ability to take a structure and determine that that structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
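For readers who want to see the shape of this trick: the residue scan is just a brute-force search over conditioning hints, keeping whichever hint the confidence model scores highest. This is a hypothetical sketch, not Boltz code; `predict_with_hint` stands in for running the conditioned structure model and reading out a confidence, and the mock scorer below simply pretends the true epitope sits near residue 47.

```python
def scan_epitope_hints(n_residues, predict_with_hint, step=10):
    """Brute-force inference-time search over binding-site hints.

    predict_with_hint(residue_index) -> confidence score, a stand-in
    for running a structure model conditioned on "the binder should
    contact this residue" and reading out its confidence.
    """
    best_hint, best_conf = None, float("-inf")
    for residue in range(0, n_residues, step):  # hint every 10th residue
        conf = predict_with_hint(residue)
        if conf > best_conf:
            best_hint, best_conf = residue, conf
    return best_hint, best_conf

# Mock confidence: highest for hints near an (invented) epitope at residue 47.
mock_confidence = lambda r: 1.0 - abs(r - 47) / 100.0
hint, conf = scan_epitope_hints(120, mock_confidence)
```

In the real setting each call is a full, expensive model evaluation, which is why this counts as inference-time scaling: more compute spent scanning buys a better answer.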
If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. Part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs.

Brandon [00:48:17]: Interesting. My understanding is there's a diffusion model, you generate some candidates, and then, as you just said, you rank them using a score, and then you finally pick. Can you talk about those different parts?

Gabriel [00:48:34]: Yeah. First of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models, learning how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2 we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight that interaction is. For BoltzGen, what we did was take that foundation model and fine-tune it to predict entirely new proteins. The way that works is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein.
The structure, and also what the different amino acids of that protein are. So the way BoltzGen operates is that you feed in a target protein that you may want to bind, or DNA, RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things.

And is that with natural language?

It's basically prompting; we have a spec that you specify, and you feed this spec to the model. The model translates it into a set of conditioning tokens and a set of blank tokens, and then, as part of the diffusion process, it decodes a new structure and a new sequence for your protein. Then we take that and, as Jeremy was saying, we try to score it: how good a binder is it to the original target?

Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule, and that gives you a score?

Gabriel [00:51:03]: Exactly. You use this model to predict the folding, and then you do two things. One is that you predict the structure of the designed sequence with something like Boltz-2, and then you compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you're trying to design, and that gives you much better confidence that it's a good design. So that's the first filtering.
And the second filtering that we did as part of the Boltz-2 pipeline that was released is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. And so one of the things where we've actually made a ton of progress since we released Boltz-2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of that interaction.

Brandon [00:52:03]: Okay, just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also its folding?

Gabriel [00:52:32]: Exactly. And one of the big things we did differently from other models in the space, and there were some papers that had done this before, but we really scaled it up, was merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works is that the only thing you're doing is predicting the structure. The only supervision we give is on the structure, but because the structure is atomic, and the different amino acids have different atomic compositions, from the way you place the atoms we understand not only the structure you wanted but also the identity of the amino acid that the model believed was there. So instead of having these two supervision signals, one discrete and one continuous, that somewhat don't interact well together.
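The two filters just described, self-consistency between the designed structure and a refold of the designed sequence, followed by ranking on model confidence, can be sketched in a few lines. This is an illustrative sketch, not the Boltz pipeline: the field names, the 2.0 Å RMSD cutoff, and the top-k of 15 are assumptions.

```python
import math

def rmsd(a, b):
    """Root-mean-square deviation between two equal-length lists of (x, y, z)."""
    assert len(a) == len(b)
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(a, b))
    return math.sqrt(total / len(a))

def filter_and_rank(designs, rmsd_cutoff=2.0, top_k=15):
    """Stage 1: keep designs whose refolded structure matches the designed one.
    Stage 2: rank the survivors by scoring-model confidence and keep the top k."""
    consistent = [d for d in designs
                  if rmsd(d["design_coords"], d["refold_coords"]) <= rmsd_cutoff]
    consistent.sort(key=lambda d: d["confidence"], reverse=True)
    return consistent[:top_k]
```

As the conversation notes, confidence is a reasonable filter for structure but a poor proxy for affinity, which is why a dedicated affinity predictor is the natural third stage.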
We built an encoding of sequences as structures that allows us to use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins.

Oh, interesting.

RJ [00:53:58]: Maybe a quick shout-out to Hannes Stärk on our team, who did all this work.

Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, there's this encoding where you add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this. It was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before.

Gabriel [00:54:33]: Yeah, a couple of papers had proposed this, and Hannes really took it to large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, everyone we talk to feels that wet-lab, real-world validation is, if not the whole problem, a big giant part of the problem. So can you talk about the highlights there? Because to me the results are impressive, both from the perspective of the model and the effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's lab, and at Boltz, we are not a biolab and we are not a therapeutics company.
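Returning briefly to the atoms-encode-the-sequence idea above: because each amino acid has a distinctive side-chain composition, placing atoms implicitly names the residue. The toy decoder below makes the over-simplified, assumed choice of distinguishing a few residues by heavy-atom element counts alone; the real encoding operates on full atom-level structures.

```python
# Toy heavy-atom side-chain compositions for four residues.
# Real residues are distinguished by full atom graphs, not bare counts.
SIDE_CHAINS = {
    "GLY": (),            # glycine: no heavy side-chain atoms
    "ALA": ("C",),        # alanine: a single carbon
    "SER": ("C", "O"),    # serine: carbon plus hydroxyl oxygen
    "CYS": ("C", "S"),    # cysteine: carbon plus sulfur
}

def decode_residue(placed_atoms):
    """Read a residue identity back off the atoms a model placed."""
    key = tuple(sorted(placed_atoms))
    for name, atoms in SIDE_CHAINS.items():
        if tuple(sorted(atoms)) == key:
            return name
    return None  # composition not in this toy table
```

The point of the trick is that one atomic-coordinate loss then supervises both structure and sequence at once, instead of mixing a continuous and a discrete objective.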
And so to some extent we were forced to look outside our group and our team to do the experimental validation. One of the things that Hannes and the team pioneered was the idea: can we go not just to one specific group with one specific system, where you maybe overfit a bit to that system while trying to validate, but test this model across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get a validation that goes across many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, with some of this testing still ongoing, and giving results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper we shared results from, I think, eight to ten different labs: results of designing peptides targeting ordered proteins, peptides targeting disordered proteins, results of designing proteins that bind to small molecules, results of designing nanobodies, across a wide variety of targets. That gave the paper a lot of validation for the model, validation that was broad.

Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well?
Gabriel [00:57:45]: They're relevant to humans as well. Obviously you need to do some work to, quote unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller: they're easier to manufacture, but at the same time that comes with other challenges, maybe a little less selectivity than something that has more hands. But there's this big desire to design mini proteins, nanobodies, small peptides, which are just great drug modalities.

Brandon [00:58:27]: Okay, I think we left off talking about validation in the lab, and I was very excited about seeing all the diverse validations that you've done. Can you go into more detail about some specific ones?

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? 14. 14 targets. The way this typically works is we make a lot of designs, on the order of tens of thousands, then we rank them and pick the top; in this case it was 15 per target. Then we measure the success rates: both how many targets we were able to get a binder for, and, more generally, out of all the binders we designed, how many actually proved to be good binders. Among the other ones, we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing and things like that, which is pretty cool.
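The two success metrics RJ distinguishes, per-target success (did any design bind?) and overall design hit rate, reduce to simple counting. Here is a sketch with invented numbers; the 100 nM hit threshold is an assumption for illustration, not the study's cutoff.

```python
def hit_rate_summary(campaign, threshold_nm=100.0):
    """Summarize a binder-design campaign.

    campaign maps target name -> list of measured affinities in nM for the
    designs tested against it (float("inf") for non-binders). A design is
    a hit when its affinity is at or below threshold_nm.
    """
    hits_per_target = {t: sum(1 for kd in kds if kd <= threshold_nm)
                       for t, kds in campaign.items()}
    targets_solved = sum(1 for h in hits_per_target.values() if h > 0)
    total_designs = sum(len(kds) for kds in campaign.values())
    return {
        "target_success_rate": targets_solved / len(campaign),
        "design_hit_rate": sum(hits_per_target.values()) / total_designs,
    }

# Invented data: three designs tested against each of two targets.
summary = hit_rate_summary({
    "TargetA": [5.0, 250.0, 4000.0],   # one strong binder
    "TargetB": [float("inf")] * 3,     # nothing bound
})
```

Reporting both numbers matters: a campaign can solve most targets while still having a low per-design hit rate, and the two answer different planning questions.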
We had a disordered protein one too, I think you mentioned. Those were some of the highlights.

Gabriel [00:59:44]: The way we structured those validations was, on one end, validations across a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments we were designing peptides that would target RACC, which is a target involved in metabolism, and we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets. We designed some proteins to bind small molecules. And then some of the other testing was really trying to get a broader sense: how does the model work, especially when tested on generalization? One of the things we found with the field was that a lot of the validation, outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. So it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. One of the experiments we did was to take nine targets from the PDB, filtering to things where there is no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein, so there is no way the model can just tweak something from its training set and imitate a particular interaction. And so we took those nine proteins.
We worked with Adaptyv, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. The very cool thing we saw was that on two thirds of those targets we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; a nanomolar binder has approximately the binding strength you need for a therapeutic.

So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is, and what you hope people take away from it?

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it in three. The first one: it's one thing to predict a single interaction, for example a single structure; it's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and a real need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design something? There are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, such that it designs within a space we know is synthesizable. So there's a whole pipeline of different models involved in being able to design a molecule. That's been the first thing; we call them agents. We have a protein design agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: These agents, are they like a language model wrapper, or are they just your models and you're calling them agents?

RJ [01:04:33]: They sort of perform a function on your behalf. They're more of a recipe, if you wish, and I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. So ideally you want to do that in parallel, otherwise it's going to take you weeks.
So we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute is the same cost as using one GPU for God knows how long, right? So you might as well parallelize if you can. A lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing it at the same time. The third part is the interface, which comes in two shapes. One is an API, and that's really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, and we've put a lot of thought into that too. This is what I meant earlier about broadening the audience; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
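The 10,000-GPUs-for-a-minute point is worth making concrete: total GPU-time is fixed by the workload, so, assuming perfect scaling and ignoring startup overhead, parallelism changes wall-clock time but not cost. The numbers below, 100k designs at 5 seconds each and $2 per GPU-hour, are invented for illustration.

```python
def screening_cost(n_designs, seconds_per_design, n_gpus, usd_per_gpu_hour):
    """Wall-clock hours and total cost for scoring a design library.

    Assumes perfect parallel scaling and no startup overhead.
    """
    gpu_seconds = n_designs * seconds_per_design          # fixed by the workload
    wall_clock_hours = gpu_seconds / n_gpus / 3600.0      # shrinks with parallelism
    total_cost = gpu_seconds / 3600.0 * usd_per_gpu_hour  # independent of n_gpus
    return wall_clock_hours, total_cost

# One GPU: roughly 139 hours. A thousand GPUs: about 8 minutes. Same dollar cost.
serial = screening_cost(100_000, 5.0, 1, 2.0)
parallel = screening_cost(100_000, 5.0, 1_000, 2.0)
```

In practice imperfect scaling, queueing, and startup overhead eat into this, which is exactly the infrastructure work being described.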
So Boltz Lab is a combination of these three objectives in one cohesive platform.

Who is this accessible to?

Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies we can deploy the platform in a more secure environment, and those are more custom deals that we make with partners. That's the ethos of Boltz, this idea of serving everyone and not necessarily going after just the really large enterprises. That starts from the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by a factor of a thousand or so over the last three years, right? Is it possible that you can exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for any person to roll their own system?

RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially at a large scale, is considerably cheaper than it would probably take anyone to stand up the open-source model and run it. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than in the open source. That's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices low enough that it would be a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point is to design something that doesn't have co-evolution data, something really novel. So you're basically leaving the domain that you know you're good at. How do you validate that?

RJ [01:09:22]: Yeah, I don't think it's ever complete, but there's obviously a ton of computational metrics that we rely on. Those only take you so far, though. You really have to go to the lab and test: okay, with method A and method B, how much better is my hit rate? How strong are my binders? And it's not just about hit rate, it's also about how good the binders are. There's really no way around that. I think we've really ramped up the amount of experimental validation we do, so that we track progress as scientifically soundly as possible.

Gabriel [01:10:00]: Yeah, and one thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those.
We, when we do an experimental validation, we try to test it across tens of targets. And so that on the one end, we can get a much more statistically significant result and, and really allows us to make progress. From the methodological side without being, you know, steered by, you know, overfitting on any one particular system. And of course we choose, you know, w
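As a rough aside on why testing across tens of targets matters: a paired comparison over per-target hit rates shows how a modest but consistent edge becomes statistically significant once you have ~20 targets, even when any single target is noisy. The numbers below are hypothetical, not Boltz data; this is a minimal sketch of the idea using a two-sided sign test, stdlib only.

```python
# Sketch: comparing design method A vs. method B across many targets.
# Hypothetical per-target hit rates; a paired sign test on which method
# "wins" each target, ignoring ties.
from math import comb

def sign_test_p(wins_a: int, wins_b: int) -> float:
    """Two-sided sign test p-value: chance of a result at least this
    lopsided if neither method were truly better."""
    n = wins_a + wins_b          # ties dropped
    k = max(wins_a, wins_b)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical hit rates on 20 targets for two methods.
hits_a = [0.12, 0.30, 0.08, 0.25, 0.18, 0.40, 0.22, 0.15, 0.33, 0.10,
          0.28, 0.19, 0.35, 0.14, 0.27, 0.21, 0.31, 0.16, 0.24, 0.09]
hits_b = [0.10, 0.22, 0.09, 0.20, 0.15, 0.33, 0.18, 0.15, 0.27, 0.07,
          0.21, 0.16, 0.30, 0.12, 0.22, 0.17, 0.26, 0.13, 0.20, 0.08]

wins_a = sum(a > b for a, b in zip(hits_a, hits_b))   # 18 targets favor A
wins_b = sum(b > a for a, b in zip(hits_a, hits_b))   # 1 target favors B
p = sign_test_p(wins_a, wins_b)                       # well under 0.001
print(wins_a, wins_b, p)
```

Any one of those per-target differences could be noise; it is the breadth of the panel that makes the method-level comparison sound.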

Table Setters: A Baseball Podcast
Guest: Chris Welsh (FantasyPros, Prospect One, In This League) | Finding Edges in Fantasy Baseball, Draft Season Truths, ADP Myths, Regression Calls & Pitcher Risk | 140

Table Setters: A Baseball Podcast

Play Episode Listen Later Feb 12, 2026 73:50


Welcome to Episode 140 of Tablesetters. Devin is joined by Chris Welsh, Host and Analyst for FantasyPros and BettingPros, co-owner of In This League, and the creator of The Prospect One Podcast. With fantasy baseball draft season fully underway, this conversation is about stripping things back to what actually matters. Chris joins the show to discuss how he's approaching drafts in 2026, how preparation has changed in an era of constant information, and where fantasy players can still gain real advantages despite ADP, rankings, and projections being more accessible than ever. The episode opens with Chris reflecting on a recent trip to New Orleans before pivoting into where he's at right now in draft season—whether he's already drafting in serious leagues or still focused on mocks, and what those high-stakes leagues actually look like in terms of format, depth, and risk tolerance. From there, the discussion moves into player evaluation and draft dynamics. We start with regression candidates, using Cal Raleigh as a focal point at catcher—how much regression to expect, how positional value factors into his lofty NFBC ADP, and whether taking a catcher that early is a bet worth making. Chris also shares additional players he believes may be overdrafted relative to expectation. We then dig into Ben Rice, his eye-popping underlying metrics, and how roster construction—specifically the Yankees' decision to re-sign Paul Goldschmidt—could impact Rice's fantasy value and playing time outlook in 2026. The conversation expands to players who have changed teams and whether those moves meaningfully raise their fantasy ceilings. From there, Devin and Chris tackle the downside of ADP itself—how it can make drafts feel rigid and formulaic—and identify the players Chris is willing to reach for anyway, trusting conviction over consensus. On the flip side, Chris revisits the idea of “disappointment” in fantasy terms—players whose production may not justify where they're being drafted. 
We also touch on Nick Kurtz's aggressive ADP and whether the price tag makes sense. Pitching strategy becomes the next focus, including how the Dodgers handle their arms, whether that caps fantasy value, and why Chris is hesitant to invest early picks in elite pitchers like Tarik Skubal, Paul Skenes, and Garrett Crochet despite their upside. Later, Chris explains why the term “sleeper” has become harder to define in modern fantasy baseball, offers his favorite sleeper for the season, and highlights his favorite “if he stays healthy” player to target. We close with a deeper look at Geraldo Perdomo's puzzling ADP despite elite underlying production, a broader discussion of shortstop as one of the deepest positions in fantasy, keeper-league strategy surrounding Konnor Griffin, and a rapid-fire round of Would You Rather draft decisions featuring players with nearly identical NFBC ADPs. Follow us for more:

The Well-Mannered Mutt Podcast
Puppy Potty Training Regression: Why It Happens and How to Fix It

The Well-Mannered Mutt Podcast

Play Episode Listen Later Feb 10, 2026 12:09


Your puppy potty training was going great, and then suddenly you're dealing with puppy accidents in the house again. If you're wondering what went wrong or feeling discouraged by potty training regression, take a deep breath. This is a normal part of development, and it does not mean training is broken.

In this episode of The Well-Mannered Mutt, I'm sharing the importance of understanding why potty training regression happens and seven actionable steps you can take right now to get puppy potty training back on track using potty training positive reinforcement instead of punishment or pressure.

Some of the things I cover in this episode are:
The most common developmental and environmental causes of potty training regression
Why stress and routine changes often lead to puppy accidents in the house
How potty training positive reinforcement supports learning during setbacks
What to do when puppy potty training feels inconsistent again
How supervision and structure prevent repeat puppy accidents in the house

Potty training regression does not mean your puppy forgot what they learned or that you failed. It means your puppy's needs have changed, and the plan needs more support. With clear structure, better supervision, and consistent potty training positive reinforcement, most puppy potty training setbacks resolve faster than you expect. This episode will help you respond calmly, reduce puppy accidents in the house, and move forward with confidence instead of starting over.

Resources mentioned in this episode:
Easy Pee-sy Puppy Potty Training
HELP! My Puppy Is Biting Me!

Connect with Staci Lemke:
Website - www.mannersformutts.com
Instagram & Facebook @mannersformutts

Bold Beautiful Borderline
Regression: Why You Act Like A Child Around Family

Bold Beautiful Borderline

Play Episode Listen Later Feb 8, 2026 28:12


Regression is a psychological response in which a person temporarily returns to earlier patterns of thinking, feeling, or behaving, often in reaction to stress, trauma, illness, or emotional overwhelm. So I read this article about regression and why I often regress to child-like behavior around my family. This might not resonate with you - all good. But if it does, I hope you learn a bit about your behavior and practice NOT shaming yourself for it.

Send us a text message to be anonymously read and responded to!

Support the show

You can find Sara on Instagram @borderlinefromhell. You can also find the podcast on IG @boldbeautifulborderline
Corey Evans is the artist for the music featured. He can be found HERE
Talon Abbott created the cover art. He can be found HERE
Leave us a voicemail about your thoughts or questions on the show at boldbeautifulborderline.com
If you like the show we would love if you could rate, subscribe and support us on Patreon. Patreon info here: https://www.patreon.com/boldbeautifulborderline?fan_landing=true
Purchase Sara's Exploring Your Borderline Strengths Journal at https://www.amazon.com/Exploring-Your-Borderline-Strengths-Amundson/dp/B0C522Y7QT/ref=sr_1_1?crid=IGQBWJRE3CFX&keywords=exploring+your+borderline+strengths&qid=1685383771&sprefix=exploring+your+bor%2Caps%2C164&sr=8-1
For mental health supports: National Suicide Pr...

Sparksine廣東話讀書會Podcast --With Isaac
Why do you always feel you're not good enough? Why is your work never done? Unpacking 12 psychological effects that leave your life "stuck"

Sparksine廣東話讀書會Podcast --With Isaac

Play Episode Listen Later Feb 7, 2026 20:34


Which common psychological patterns are quietly shaping our performance? Why do we feel like an "impostor" no matter how hard we work? Why do projects always slip? Why is Monday at work especially exhausting? Today I want to share a book written by the production team of Korean broadcaster EBS's show 《世界上所有的法則》 (All the Laws of the World): 《為什麼我的人生這麼不順,原來讓世界運轉的法則在這樣》 (Why Is My Life So Rough? So These Are the Laws That Make the World Go Round). We will take a deep dive into 12 psychological effects that shape personal ability, teamwork, and time management: from "impostor syndrome" to the "Tom Sawyer effect" to "Hofstadter's Law", which explains why work takes longer than expected. Once you see through the laws behind these behaviors, you hold the key to rewriting the rules of your life's game!

Paranormal UK Radio Network
Trans-Dimensional Realities - Episode 5 - Psychic Medium Patti Lehman

Paranormal UK Radio Network

Play Episode Listen Later Feb 5, 2026 57:26 Transcription Available


Milyssa talks with psychic medium Patti Lehman about her work in New Jersey.Become a supporter of this podcast: https://www.spreaker.com/podcast/paranormal-uk-radio-network--4541473/support.

Tavis Smiley
Dedrick Asante-Muhammad Joins Tavis Smiley

Tavis Smiley

Play Episode Listen Later Feb 2, 2026 16:15 Transcription Available


Dedrick Asante-Muhammad, CEO of the Joint Center for Political and Economic Studies, discusses the organization's recent report, “State of the Dream 2026: From Regression to Signs of a Black Recession.”Become a supporter of this podcast: https://www.spreaker.com/podcast/tavis-smiley--6286410/support.

The Voice of Early Childhood
A guide to potty training

The Voice of Early Childhood

Play Episode Listen Later Feb 2, 2026 45:51


The new government-backed Potty Training Guide moves away from the old 'readiness' model and promotes early, gradual learning and preparation from infancy. This article and podcast episode explore what the guidance means for families and settings, why coming out of nappies should be the final step in learning, and how practitioners and parents/carers can support confident, healthy toilet learning.   Read the article here: https://thevoiceofearlychildhood.com/a-guide-to-potty-training/   This episode is in partnership with BookedIn. BookedIn is a CPD booking platform that connects organisations with verified speakers, trainers and consultants – so you can find the right fit faster, based on your brief, audience and outcomes. You can discover, compare, and manage bookings in one place – designed to help you book with more clarity and confidence. Whether you're booking CPD or are a speaker yourself, they're opening early access soon, and if you want to be first to hear when it's live, join the waiting list NOW! To find out more and sign up to the wait list visit: https://waitlist.bookedin.online/   Our 2026 conference info & tickets: https://thevoiceofearlychildhood.com/early-years-conference-2026/   Listen to more: If you enjoyed this episode, you might also like: ●      Tummy time is an outdated notion, by Christine Wilkinson & Rachel Tapping: https://thevoiceofearlychildhood.com/tummy-time-is-an-outdated-notion/ ●      Starting school: Supporting transitions to reception and key stage 1, by Delyth Linacre: https://thevoiceofearlychildhood.com/starting-school-supporting-transitions-to-reception-and-key-stage-1/   Get in touch and share your voice: Do you have thoughts, questions or feedback? Get in touch here! 
– https://thevoiceofearlychildhood.com/contact/   Episode break down: 00:00 – Welcome & guest introduction: Rebecca Mottram 03:10 – Why potty learning is in the spotlight & new England guidance overview 07:10 – Reframing potty learning as a developmental journey (moving away from "ready") 11:45 – "Nappies off" as the final step: capability, gradual skill-building, avoiding sudden transitions 17:05 – Practical foundations before nappies come off: sensory feedback & bathroom routines 20:50 – Rebecca's new book Positively Potty 22:10 – Nappies: cloth vs disposable & using nappies "mindfully" 25:55 – When should children be out of nappies? 29:20 – Starting school: curiosity over judgement 34:30 – Working in partnership with parents: earlier, joined-up support 36:40 – Regression and plateaus: learning isn't linear 39:10 – Motivating without treats: rewarding effort and engagement 41:20 – Play as the engine of potty learning: props, stories, role play 43:25 – Accidents & language: staying neutral; inclusive toileting practice   For more episodes and articles visit The Voice of Early Childhood website: https://www.thevoiceofearlychildhood.com

Refresher- The Pop Culture Therapy Podcast
Age Regression...I'm Just Kidding

Refresher- The Pop Culture Therapy Podcast

Play Episode Listen Later Feb 1, 2026 16:04


Good thing or destructive thing? It all depends.

Parenting After Trauma with Robyn Gobbel
{RE-RECORD} EP 15: Lying as a Trauma Driven Behavior

Parenting After Trauma with Robyn Gobbel

Play Episode Listen Later Jan 30, 2026 53:31


This episode originally aired in 2020. It's a very popular episode that deserved being updated because so many folks are still listening!

Lying is probably the behavior parents seek support with the most. It's confusing. It's triggering. It's exhausting. We can use our x-ray vision goggles to get underneath the lying so we can respond in ways that actually set the boundary and increase the possibility of helping our children develop more socially and relationally appropriate behaviors. Want more on Lying as a Trauma Driven Behavior? Check out my blog! https://robyngobbel.com/lying/

Additional Resources:
Lying as a Trauma Driven Behavior Infographic
Free Resource Hub: RobynGobbel.com/FreeResourceHub
Ep 222: Lying, Stealing, Regression and Baby Talk
Register for the FREE Focus on the Nervous System to Change Behavior webinar on February 3. Choose from 10am eastern, 8pm eastern, or just watch the recording. Register Here ---> RobynGobbel.com/webinar
I would love to have you join me this March in Durango, CO for a 3-day, retreat-style workshop: Presence in Practice: An experiential workshop into the neurobiology of how change happens. All details and registration ------> https://RobynGobbel.com/Durango
Register by January 31 for $25 off!

Grab a copy of the USA Today best-selling book Raising Kids with Big, Baffling Behaviors: robyngobbel.com/book
Join us in The Club for more support! robyngobbel.com/TheClub
Sign up on the waiting list for the 2027 Cohorts of the Baffling Behavior Training Institute's Immersion Program for Professionals: robyngobbel.com/Immersion

Follow Me On: Facebook, Instagram

Over on my website you can find:
Webinar and eBook on Focus on the Nervous System to Change Behavior (FREE)
eBook on The Brilliance of Attachment (FREE)
LOTS & LOTS of FREE Resources
Ongoing support, connection, and co-regulation for struggling parents: The Club
Year-Long Immersive & Holistic Training Program for Parenting Professionals: The Baffling Behavior Training Institute's (BBTI) Professional Immersion Program (formerly Being With)

Rätsel des Unbewußten. Ein Podcast zu Psychoanalyse und Psychotherapie
Pseudotherapies: How they work – how to recognize them

Rätsel des Unbewußten. Ein Podcast zu Psychoanalyse und Psychotherapie

Play Episode Listen Later Jan 30, 2026 59:59


In the "psycho-industry" there are offerings that look like therapy but are not. In pseudotherapies, problematic dynamics can arise – from subtle boundary-shifting all the way to destabilization and malignant dependency. How do you recognize legitimate help, and at what point does it become critical? We put the central mechanisms in context (including transference, regression, and power and dependency relationships) and discuss them through three case examples.

Script for this episode: https://www.patreon.com/posts/149413521

Reading recommendation for this episode: Diana Pflichthofer (2024). Die Psychoindustrie. Wien: Goldegg Verlag. https://amzn.to/4rg1tlO

Sources of help in mental health crises: https://www.stiftung-gesundheitswissen.de/gesundes-leben/psyche-wohlbefinden/hilfe-bei-psychischen-problemen-diese-stellen-koennen-sie-sich
In a mental health crisis, your family doctor, a psychiatrist, or a psychotherapist can also be points of contact. In emergencies, you can also turn to a psychiatric hospital.

Gift a Rätsel des Unbewussten subscription: https://www.patreon.com/raetseldesubw/gift
Description of the membership levels: https://www.patreon.com/c/raetseldesubw/membership
If you want all hand-bound booklets published so far (12 booklets) => annual subscription at the "Liebhaber" level

Further reading for this episode:
Auchter, T (2019): Trauer. Gießen: Psychosozial.
Auchter, T (1995). Über das Auftauen eingefrorener Lebensprozesse. Winnicotts Konzepte der Behandlung schwerer seelischer Erkrankungen. Forum der Psychoanalyse, 11, 62–83.
Haas, E (1998): Rituale des Abschieds: Anthropologische und psychoanalytische Aspekte der Trauerarbeit. Psyche, 52, 5, 450–470
Volkan, V (1981): Linking Objects and Linking Phenomena. A Study of the Forms, Symptoms, Metapsychology and Therapy of complica

- In-depth episode on ending therapies ("Beendigung von Therapien") on Patreon: https://www.patreon.com/posts/127931630
- Episode on Glenn Gabbard and the "lovesick" analyst: https://www.patreon.com/posts/121877727?collection=148939

Script for this episode: https://www.patreon.com/posts/145065724
Contact: lives@psy-cast.org
Parenting concepts from a psychoanalytic perspective (5 parts): https://www.patreon.com/collection/148943
Digital reading circle on "How digitalization is changing our psychic structure" (first episode freely accessible): https://www.patreon.com/posts/lesekreis-werner-94838102
Order our book via genialokal: https://www.genialokal.de/Produkt/Cecile-Loetz-Jakob-Mueller/Mein-groesstes-Raetsel-bin-ich-selbst_lid_50275662.html and wherever books are sold. Also available as an audiobook!
Link to our website: www.psy-cast.de
**We also appreciate support for our project via PayPal**: https://www.paypal.com/donate/?hosted_button_id=VLYYKR3UXK4VE&source=url
Newsletter sign-up: https://dashboard.mailerlite.com/forms/394929/87999492964484369/share

At www.patreon.com/raetseldesubw you will find many more fascinating topics (including a conversation series on famous psychoanalysts, the depth psychology and cultural history of colors, and parenting from past to present). The scripts for all of our episodes are also available there.

Music: Evergreen, Kintsugi (licensed via premiumbeat.com)

The Autism Mom’s Potty Talk Podcast
Ep 62 - The Stage Where Most Parents Get Stuck

The Autism Mom’s Potty Talk Podcast

Play Episode Listen Later Jan 29, 2026 19:39


Your child goes every time you take them. No more diapers. You're reminding them every two or three hours and they're keeping their pants dry. You did it, right? Not yet.

In this episode, I'm talking about the stage where most parents get comfortable—and stuck. That in-between place where your child will use the toilet when prompted, but they're not going on their own. I call it prompt dependence, and it's where so many families park themselves because they're scared of losing the progress they worked so hard to get.

I get it. You spent years in diapers. You finally got them going. Now I'm asking you to shake the tree again? Yes. Because true potty training independence isn't your child going when YOU tell them to—it's them feeling nature call and answering it themselves.

I'm breaking down why parents get stuck here, what's really driving the fear of "regression," and how to widen the window so your child can start hearing their own body instead of waiting for your voice.

In This Episode:
The difference between going on command and true independence
Why the autism community's fear of regression keeps parents stuck
What "widen the window" means and how to do it
How to handle accidents during this stage without losing progress
Why your child won't start from zero if they regress—and the science behind that
The mindset shift that changes everything: "I created this result, so I can create it again"

Key Takeaways:
If you're reminding your child every 2-3 hours and they're going, that's a win—but it's not the finish line.
There are two voices that can tell your child when to go: your external voice and their body's internal voice. They've learned to listen to yours. Now they need the opportunity to hear theirs.
Regression isn't death. When you get back on the horse, you don't start at zero—you start just a little before where you were and pick up quickly.
Your survival brain and your child are both going to fight for comfort. Your higher brain knows what's possible.
You didn't get this far by accident. You worked for it. That means you can get it back—and go further.

Quotables:
"We didn't have babies just to live survival lives. We had babies to teach them to thrive."
"True independence is when the child feels their body needing to go, and they go release on their own."
"If I created that result, I can create it again. It didn't just happen to me. It wasn't magic potty pixie dust."
"Every up-level is shaking your tree. It's going to be uncomfortable. Your brain's going to tell you all the reasons not to mess things up. That's not the life you signed up to live."

Deeper Look At The Parsha
AVOIDING THE TRAP OF REGRESSION

Deeper Look At The Parsha

Play Episode Listen Later Jan 29, 2026 8:28


Freedom doesn't always make people braver. Sometimes it makes them afraid. From ancient Egypt to modern politics, moments of success often trigger a dangerous instinct to retreat into familiar but destructive ideas. Drawing on history and the Torah's account of the Red Sea, Rabbi Dunner explores why even when regression feels good, resisting it is the real test of moral maturity.

Politics in Question
How Does Transformation Lead to Regression?

Politics in Question

Play Episode Listen Later Jan 28, 2026 54:46


In this week's episode of Politics in Question, Lee and James talk with their former co-host Julia Azari about the role of presidents in shaping racial norms. Azari is a Professor of Political Science at Marquette University and author of Backlash Presidents (Princeton University Press, 2025). How have presidents shaped racial norms? Why was President Andrew Johnson a "backlash president"? What role does Congress play in coalition-building and norm shaping? These are some of the questions Lee and James explore in this week's episode. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Voices of The Vic
Regression at Rovers | VOTV Podcast | Blackburn Rovers 1-1 Watford Match Reaction

The Voices of The Vic

Play Episode Listen Later Jan 25, 2026 92:46


On today's episode of Voices of the Vic, Mike Duffy is joined by Cam Smart & Joe Thomas to break down Watford's frustrating 1–1 draw with Blackburn Rovers at Vicarage Road. Once again, the Hornets struggled to break down a side coming into the game in poor form. The lads discuss: •

The 'X' Zone Radio Show
Rob McConnell Interviews - PETRENE SOAMES - Self Healing and Time Travel

The 'X' Zone Radio Show

Play Episode Listen Later Jan 24, 2026 60:09 Transcription Available


Petrene Soames is an intuitive healer and regression therapist known for her work in self-healing, past-life exploration, and time-travel consciousness through deep trance and guided regression. Soames explores how consciousness can move beyond linear time, allowing individuals to access past, parallel, or future experiences that contribute to emotional healing, physical relief, and spiritual understanding. Her work emphasizes personal empowerment, soul memory, and the idea that healing can occur across time by addressing root causes stored within consciousness itself.

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-x-zone-radio-tv-show--1078348/support.

Please note that all XZBN radio and/or television shows are Copyright © REL-MAR McConnell Media Company, Niagara, Ontario, Canada – www.rel-mar.com. For more episodes of this show and all shows produced, broadcasted and syndicated from REL-MAR McConnell Media Company and The 'X' Zone Broadcast Network and the 'X' Zone TV Channel, visit www.xzbn.net. For programming, distribution, and syndication inquiries, email programming@xzbn.net.

We are proud to announce that we have launched TWATNews.com in August 2025. TWATNews.com is an independent online news platform dedicated to uncovering the truth about Donald Trump and his ongoing influence in politics, business, and society. Unlike mainstream outlets that often sanitize, soften, or ignore stories that challenge Trump and his allies, TWATNews digs deeper to deliver hard-hitting articles, investigative features, and sharp commentary that mainstream media won't touch. These are stories and articles that you will not read anywhere else. Our mission is simple: to expose corruption, lies, and authoritarian tendencies while giving voice to the perspectives and evidence that are often marginalized or buried by corporate-controlled media.

The Southern Tea
New Beginnings & Emotional Regression Challenges feat. Kayla

The Southern Tea

Play Episode Listen Later Jan 21, 2026 83:29


On today's episode, Lindsie addresses Kristen's departure to prioritize her health. She also opens up about the emotional challenges of parenting through divorce, discussing Jackson's quiet emotional regression that has her worried again. Kayla and Lindsie dive into junk drawers and fitted sheets, a shocking fact about hotel comforters, and also talk attachment styles.

Follow us @TheSouthernTeaPodcast for more! Visit Kayla @corporatespiritguide

Thank you to our sponsors!
Brooklyn Bedding: Visit BrooklynBedding.com and promo code SOUTHERNTEA for 30% off sitewide
Homeserve: Plans start at just $4.99 a month
Nutrafol: Get $10 off your first month's subscription and free shipping when you go to Nutrafol.com and enter code SOUTHERNTEA
Progressive: Visit Progressive.com to learn more!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Voice Junkie
VOICE JUNKIE PODCAST 094 - MCDERMOTT FIRED, CJ STROUD REGRESSION (AUDIO)

Voice Junkie

Play Episode Listen Later Jan 21, 2026 19:05


In Episode 94, Chuck focuses on this past week's NFL Divisional Playoff games.


Get Up!
Hour 2: Historic Hoosiers, Bears Regression, Mendoza Top Pick

Get Up!

Play Episode Listen Later Jan 20, 2026 48:37


Get Up resumes with the most improbable champions in history... Mendoza and Cignetti finished off the Hoosiers' undefeated season. We praise them like we should. (0:00) Meanwhile - was the magic at the midway all smoke and mirrors? Were the Bears one-year wonders, or did we just see the beginning of the NFL's next dynasty? (14:30) Then - the college football season has wrapped up, so it's time for draft talk! Did Fernando Mendoza prove enough for the Raiders to draft him first overall? (23:40) Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Past Lives Podcast
Commentary on a Past Life Regression

The Past Lives Podcast

Play Episode Listen Later Jan 19, 2026 63:26


This week is a little different. Late last year Craig Meriwether took me through a past life regression, and in this episode Craig and I listen to the recording and discuss what happened.

Are you curious about the mysteries of reincarnation and the healing potential of past life regression? This groundbreaking guide reveals profound insights into the soul's journey through time—drawing inspiration from the work of pioneering authors like Brian Weiss, Michael Newton, Dolores Cannon, and others. In The Past Lives Guidebook, you'll explore a transformative blend of science, neurobiology, spirituality, and real-life stories of healing through past life therapy.

Inside, you'll discover:
A deep dive into the core principles of past life regression and its power to support emotional, spiritual, and even physical healing
How various religious and spiritual traditions—including Hinduism, Buddhism, Judaism, Christianity, and indigenous cultures—understand reincarnation, karma, and the soul's evolution
Scientific insights from biology and neuroscience that reveal how past life experiences may influence current behaviors, emotions, and health issues
A step-by-step overview of how a past life regression session works, and how it can be used for self-discovery, healing, and personal transformation
A fascinating look at future life exploration—and how glimpses of your possible futures can inform and empower your choices today

Past life regression is more than a tool for healing—it's a pathway to living with deeper purpose, clarity, and connection to your soul's wisdom. Whether you're seeking to uncover hidden memories, release emotional wounds, or explore the infinite possibilities of your soul's journey, this accessible and compelling guide invites you to step beyond the limits of time and discover the healing potential that lies within.

Craig Meriwether is a mindset coach and clinical hypnotherapist who helps people release negative emotions, trauma, and limiting beliefs so they can reach their full potential. A Certified Clinical Hypnotherapist, Medical Hypnosis Specialist, and NLP Practitioner, Craig is the founder of Arizona Integrative Hypnotherapy and Sacred Mystery Hypnotherapy. For over 12 years, he has worked with clients worldwide—helping people heal from childhood trauma, supporting cancer patients with pain control, assisting veterans with PTSD, guiding students through test anxiety, empowering entrepreneurs with confidence, coaching athletes toward peak performance, and helping anyone struggling with fear, anxiety, or overwhelm.

Through Sacred Mystery Hypnotherapy, Craig specializes in spiritual healing, including past life regression, spirit world regression, and connecting clients with spirit guides and ancestors. He offers private online sessions, workshops, and multi-day retreats across the U.S. and internationally. Craig is a graduate of the Hypnotherapy Academy of America, completing 500 hours of Clinical Hypnotherapy Training and earning his Certification as a Medical Hypnosis Specialist, along with 200+ hours of advanced study in hypnotherapy and NLP.

Bio: Craig Meriwether, CHT-CMS, is a leader in the field of past-life regression and hypnotherapy. He has conducted thousands of sessions, helping people connect with their past lives, receive guidance from the spirit world, and heal from trauma, emotional blocks, and fear in their current life. Through his company, Sacred Mystery Hypnotherapy, Craig offers one-on-one past-life regression sessions, as well as workshops and multi-day retreats both nationally and internationally. Craig is a graduate of the renowned Hypnotherapy Academy of America, where he completed 500 hours of classroom-style training to become a Certified Clinical Hypnotherapist (CHT), earning additional Certification as a Medical Hypnosis Specialist (CMS). His training was taught by leading experts in hypnotherapy and medical professionals. He has also completed over 200 hours of continuing education in hypnotherapeutic techniques, past-life regression, and neuro-linguistic programming (NLP). He is the author of Depression 180, praised by Wendy Love, creator of DepressionGateway.com, as "one of the best, most thorough books on depression I have read." Psychologist Dr. Steven Gurgevich described it as "the most comprehensive and user-friendly resource to help ourselves and loved ones struggling with depression." Craig is also the creator of The Mind Mastery Blueprint and the Life Transformation Kit, and he is a featured author in the New York Times bestselling book Pearls of Wisdom: 30 Inspirational Ideas to Live Your Best Life Now!, alongside Jack Canfield, Marci Shimoff, and Janet Attwood.

https://sacredmysteryhypnotherapy.com/
https://craiginreallife.com/
https://www.pastliveshypnosis.co.uk/
https://www.patreon.com/ourparanormalafterlife
My book 'Verified Near Death Experiences' https://www.amazon.com/dp/B0DXKRGDFP

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Liberty Yell
The Liberty Yell #122: The Right Way

The Liberty Yell

Play Episode Listen Later Jan 16, 2026 95:24


We are back for Episode 122, and man, do we have a lot to talk about!
- 0-4-1 since the Anaheim game
- The Flyers missed Drysdale the most
- Regression is here
- Dvorak extension talk
- 9 goals for in the last 5 games, and three of those goals (maybe more) were in garbage time
- PP and PK are really, really bad
- Dan Vladar is hurt. What happens now?
- Questions from our viewers
- Tocchet/Michkov continues
- Denver Barkey can't go back anytime soon
- What does the trade deadline hold?
and a ton more. JOIN US!!!!

FOLLOW US ON X, IG AND FACEBOOK @THELIBERTYYELL

UAP - Unidentified Alien Podcast
UAP EP 181 The Antonio Alves Story part 2 - Insectoids, Alien Home Invasion, and Hypnotic Regression

UAP - Unidentified Alien Podcast

Play Episode Listen Later Jan 16, 2026 51:34


In this shocking part two, Antonio Alves recalls more incredible details about his ongoing experiences with alien beings. How did his girlfriend see him as an insectoid? Why did 4 greys invade his home? And get ready to hear, in vivid detail, his word for word recollection of an alien abduction through a documented hypnotic regression session. All of this and much more right now... Go to surfshark.com/UAP or use code UAP at checkout to get four extra months of Surfshark VPNSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Celtics Reddit Podcast
A rough patch or a regression to the mean?

Celtics Reddit Podcast

Play Episode Listen Later Jan 14, 2026 63:50


The Pacers loss has us in our feelings and second-guessing the ceiling of this otherwise inspiring Celtics team. Sam LaFrance of Hardwood Houdini and the How 'bout Them Celtics pod joins us to discuss all the latest with our favorite team. Learn more about your ad choices. Visit megaphone.fm/adchoices

Inner Journey with Greg Friedman
Inner Journey with Greg Friedman welcomes Karen Kubicko

Inner Journey with Greg Friedman

Play Episode Listen Later Jan 14, 2026 110:07


Greg welcomes Karen Kubicko to Inner Journey. Karen Ann Kubicko is a psychic intuitive, certified hypnotherapist, past life regression expert, and Reiki Master dedicated to helping others on a soul level. With nearly 200 of her own past life memories, Karen shares her insights in Life Is Just Another Class, Making True Love, and 10 Signs You've Lived a Past Life, inspiring others on their journey of self-discovery and spiritual awakening.

Data Gurus
Synthetic Sample with Carol Sue Haney of Qualtrics

Data Gurus

Play Episode Listen Later Jan 13, 2026 13:51


Host Sima Vasa welcomes Carol Sue Haney, Head of Research and Data Science, Engineering at Qualtrics, to discuss the transformative role of AI and data science in the market research industry. Carol Sue explains Qualtrics' early bet on generative AI and the development of proprietary LLMs, moving into agentic work and synthetic sampling, which she predicts will rival non-probability human sampling for quick-turn research. She emphasizes the challenges CMOs face with data overload and the fundamental importance of using regression analysis to link customer experience (CX) data, including the surprising weight of marketing messages, to crucial business outcomes like renewal and revenue growth.
Key Takeaways:
00:00 Introduction.
03:12 Data research careers spanned decades before computers existed.
06:35 Early generative AI investment provides significant competitive advantages.
09:20 Synthetic research boosts accuracy using rich, proven seed data.
13:02 AI models instantly incorporate new information for continuous improvement.
17:09 Regression remains essential for identifying true business drivers.
20:42 Curated data and guided AI make regression faster and reliable.
24:18 Financial independence through careers empowers women in critical ways.
25:42 Mentorship and knowledge sharing strengthen the entire research industry.
Resources Mentioned: Qualtrics | Website
#Analytics #MA #Data #Strategy #Innovation #Acquisitions #MRX #Restech

Seek Reality – Roberta Grimes
Andy Tomlinson and Reena Kumarasingham Talk About Between-Lives Regression

Seek Reality – Roberta Grimes

Play Episode Listen Later Jan 13, 2026 50:18


Dear friends, right on the edge of modern cutting-edge therapy is something that very few modern therapists yet can do,... The post Andy Tomlinson and Reena Kumarasingham Talk About Between-Lives Regression appeared first on WebTalkRadio.net.

Alternative To What?
Alternative To What? - Episode January 8, 2026

Alternative To What?

Play Episode Listen Later Jan 9, 2026


Guest host this week, Andrew Waller, host of Breaking The Tethers
Playlist:
Etran De L'Aïr - Imouha
Howard Roberts - Unfolding in
Dory Hayley - Scenes from MacBeth, I. wyrd sisters
Black Sabbath - Wishing well
Cepheidae Variable - Overture
Jeff Tweedy - Amar Bharati
Passport - Get yourself a second passport
Christina Ruf - Joint written in the marginalia of time
Pot pourri - Beat rave
Teejay Riedl - Sombre reptiles
Krakhouse - We go blam blam
Amnesiac Quartet - Bodysnatchers
Richard Leo Johnson - Bob Peanut
Gladhanding - Slow cook cold shoulder
Phil Miller, In Cahoots - Big Dick
Blue Oyster Cult - Teen archer
Pentangle - The snows
Warren Zevon - Reconsider me
The Kasambwe Brothers - Langizani mwachikondi
Mahavishnu Orchestra - Lila's dance
kitschmonger - Regression toward the mean
Engrupid Pipol - Inspireichon burn

Conservative Review with Daniel Horowitz
The Great Regression of Artificial Unintelligence and How to Fix It | 1/8/26

Conservative Review with Daniel Horowitz

Play Episode Listen Later Jan 8, 2026 67:24


We begin by addressing Trump's order to ban corporate purchase of residential homes. While I agree with reversing this odious trend, we must not forget that it is a mere symptom of the broader problem of high prices created by government debt, Federal Reserve policies, and HUD policies. Trump is treating the symptom while supporting all the governmental policies that caused high prices. Next, we're joined by Tim Estes, an AI entrepreneur, who passionately makes the case that our government is working with the wrong companies and the wrong strategy on perfecting AI. The AI path we are on is supplanting human dignity, unnaturally drawing investment away from more promising aspects of the technology, and will actually result in losing to China. Estes has an app he is developing to help parents harness technology to make the internet experience human-led and to combat all of the mental, cognitive, and social ills this technology has wrought in recent years.  Learn more about your ad choices. Visit megaphone.fm/adchoices

KNBR Podcast
Adam Caplan with preview suggesting offensive regression but stout 2025 defense

KNBR Podcast

Play Episode Listen Later Jan 8, 2026 18:08


Adam Caplan previews 49ers/Eagles and talks about the offensive line regression this season but points to strong defense as key for Philly in Wild Card showdown. See omnystudio.com/listener for privacy information.

Tolbert, Krueger & Brooks Podcast Podcast
Adam Caplan with preview suggesting offensive regression but stout 2025 defense

Tolbert, Krueger & Brooks Podcast Podcast

Play Episode Listen Later Jan 8, 2026 18:08


Adam Caplan previews 49ers/Eagles and talks about the offensive line regression this season but points to strong defense as key for Philly in Wild Card showdown. See omnystudio.com/listener for privacy information.

Joe Rose Show
HR 4- Aikman's Impact, Tua's Regression, Panthers Check-in

Joe Rose Show

Play Episode Listen Later Jan 5, 2026 36:49


Joe dives into NFL upheaval as the Browns fire head coach Kevin Stefanski and the Dolphins make a puzzling move by bringing in Troy Aikman to assist with their GM search, despite Dan Marino's longstanding presence in the organization. After another brutal cold-weather loss in New England, Joe questions whether Tua's physical regression and shaken confidence are holding Miami back and stresses the need for a truly weather-proof quarterback. The discussion turns to offseason chaos in Miami, including the real possibility of Lamar Jackson becoming a target. The hour wraps with a check-in on the Florida Panthers at the midway point of the season, with optimism around the returns of Matthew Tkachuk and Aleksander Barkov.

Your New Puppy: Dog Training and Dog Behavior Lessons to Help You Turn Your New Puppy into a Well-Behaved Dog

Potty training was going so well… until it suddenly wasn't. Let's discuss why potty training regression is completely normal, what it actually means, and how to get back on track without feeling like you've failed your puppy. You have a stretch of good days, you start to feel confident, and then all of a sudden, the accidents are back. It feels like you're back at square one, you start asking what you did wrong, and you might even wonder what's wrong with your puppy. Here is the truth: Regression is normal. It doesn't mean you failed. It doesn't mean your puppy forgot everything. It means you are in the middle of the process. If this episode really resonates, I also recommend Episode 79: "Are You Still Struggling With Potty Training?" where I walk through the three key points I always cover with clients who are stuck.
In this episode, I talk about:
What potty training actually looks like (progress is NOT a straight line)
The most common reason for potty training regression.
Why regression seems to happen right at the end.
How a small change in routine can have a big impact on your progress.
The difference between a random accident and true regression.
Enjoying this podcast? Please rate and review it wherever you listen. This helps other puppy parents find it.
Other resources mentioned and related to this episode:
Complete Guide to Potty Training Your Puppy: A free download that walks you through my complete process
YNP #065: YOUR New Puppy: My signature new puppy course that has helped hundreds of new puppy parents raise their puppies into well-mannered, happy dogs. Includes live support from me so you don't have to do this alone!
Additional Potty Training Episodes:
YNP #009: Why Indoor Pads Should Not Be Used When House Training Your Puppy
YNP #010: Complete Guide to Potty Training Your Puppy
YNP #039: Common Potty Accidents
YNP #041: Raising a Puppy in an Apartment
YNP #056: Potty Training for a Working Household
YNP #072: Should You Use a Potty Bell?
YNP #079: Are You Still Struggling With Potty Training?
YNP #085: How to Handle Potty Training Overnight
YNP #090: When Your Dog Hates the Rain
YNP #092: Why Not to Punish Puppy Potty Accidents

Business Pants
2025 QUIZ: women on boards, ESG regression, DEI rebrands, plus 2026 headline predictions

Business Pants

Play Episode Listen Later Dec 23, 2025 65:44


2025 REVIEW QUIZ:
True or False: Nearly half of directors think their board adds insufficient value.
What percentage of directors said their board adds no value at all? A) 10% B) 18% C) 31% D) 69% (nice)
True or False: Women run 11% of Fortune 500 companies in 2025. True — 11%. Don't clap.
Women hold 24% of CEO pipeline roles but only ___% of promotions. A) 24% B) 16% C) 8% D) 0%, if the board had its way
Which company plans to automate up to 90% of privacy and societal risk reviews using AI? A) OpenAI B) Meta C) Google D) Twitter (sorry, “X”)
Why did BlackRock get removed from Texas' boycott list? A) Legal challenge B) Accounting error C) ESG retreat D) They promised not to say “climate” out loud
Why did PepsiCo say it delayed its net-zero target from 2040 to 2050? A) The board miscalculated emissions B) Shareholders voted against climate goals C) A change in climate accounting rules D) “The systems around us” weren't ready
True or False: UK financial regulators scrapped mandatory rules because “DEI paperwork is annoying.” True: UK financial regulators scrapped mandatory DEI rules citing regulatory burden.
The new acronym JPMorgan prefers over “DEI” is: D&I / EDI / DOI / “Diversity, Opportunity & Inclusion” / “Please Stop Asking”
Which word even became unsafe during federal climate language purges? A) Sustainability B) Climate C) Resilience D) All of them, coward
Which CEO criticized ISS and Glass Lewis as “incompetent”? A) Elon Musk B) Jamie Dimon C) Larry Fink D) All men eventually
Which phrase best describes modern CEO accountability? A) Robust B) Improving C) Optional D) Decorative
How many women have founded and led a Fortune 500 company? One.
Bonus: Who was that woman? Marion Sandler, co-founder and co-CEO (with her husband Herbert Sandler) of Golden West Financial.
True or False: Board gender diversity plateaued around 30%. True — progress hit a ceiling and called it success.
What % of Russell 3000 boards have 50% women? 6% / 15% / 22% / Enough to declare victory
True or False: MI6 appointed its first female chief in 2025. True — MI6 got there before corporate America. Blaise Metreweli.
Which ESG metric disappeared first from earnings calls? Diversity statistics / Emissions targets / Human rights language / All of the above, but quietly
The most common excuse for oversized boards: Complexity / Global reach / “We need all these people” / Founder feelings
Which industry saw the biggest rollback in ESG commitments? Energy / Finance / Consumer packaged goods / Tech pretending it's neutral
What's the fastest-growing category of CEO compensation? Cash bonuses / Stock options / Performance shares / “Retention” awards for staying
What's the most common DEI rebrand in 2025? Belonging / Culture / Talent strategy / Risk management
What actually drives CEO pay upward during stock declines? Peer benchmarking / “Retention risk” / Board discretion / Fear
Why are women overrepresented in “glass cliff” roles? Risk tolerance / Crisis optics / Limited pipeline / Convenient scapegoating
What is the most accurate definition of “independent director” in 2025? No financial ties / No employment ties / No visible conflict / No intention of rocking the boat
Scoring Rubric:
23–25 correct: “Governance Adult.” You actually listen. Disturbing.
18–22 correct: “Proxy Advisor Apologist.” You skimmed. You nodded. You missed the point.
13–17 correct: “Boardroom Vibes Guy.” You believe independence is a feeling.
8–12 correct: “CEO Whisperer.” You think pay packages are earned and boards try their best.
Below 8: “Kimbal Musk.” Please stop hosting the show.
Which of these headlines are most likely to occur in 2026:
Elon Musk announces Groxxx69, the latest iteration of Grok AI dedicated entirely to porn, 69, weed, pro wrestling, Call of Duty, and matchbox cars: 2
DoorDash announces a 12-year $8.4bn pay package for CEO Tony Xu: 9
DoorDash announces cutting staff 80% due to AI: 8
Costco Caves to Trump, Cuts DEI: 1
ISS and Glass Lewis announce new zero page voting policy: 5
Brian Cornell resigns from Target board: 7
CEO of McDonald's refuses to resign after admitting to affair with other executives: 8
Sam Altman says he is terrified: 6
Shareholders overwhelmingly vote out directors early in proxy season: 9
Tim Cook announces retirement in 2028: 1

The Kevin Sheehan Show
Can Jayden Daniels' "regression" be tied to Kliff Kingsbury's scheme?

The Kevin Sheehan Show

Play Episode Listen Later Dec 17, 2025 21:12


12.17.25, Kevin Sheehan asks callers for their biggest concern for Jayden Daniels going into next year and beyond.

Creating Wealth Real Estate Investing with Jason Hartman
2366 FBF: Land-to-Improvement Ratios & Regression to Replacement Cost

Creating Wealth Real Estate Investing with Jason Hartman

Play Episode Listen Later Dec 13, 2025 31:23


This Flashback Friday is from episode 461, published last January 5, 2015.  On today's Creating Wealth Show, Jason Hartman talks about the vital side of investing that is construction cost. As an investor within real estate, it's so important to know the situation, whether it be adjusting how much you pay contractors to match with the area itself or knowing just how much the replacements to your property would be compared with the actual cost price.   #JasonHartman #CreatingWealth #RealEstateInvesting #IncomeProperty #RegressionToReplacementCost #LTIRatios #LandToImprovementRatio #ConstructionCost #InvestmentRisk #CashFlowInvesting #PrudentInvestor #PackagedCommodities #NAHB #CyclicalMarkets #LinearMarkets #Birmingham #MeetTheMasters #FinancialFreedom #PropertyDepreciation #TaxBenefits #HartmanRiskEvaluator #FlashbackFriday #PropertyPerformer #CaliforniaRealEstate #TexasRealEstate #Homebuilders #ValueDrivers   Follow Jason on TWITTER, INSTAGRAM & LINKEDIN Twitter.com/JasonHartmanROI Instagram.com/jasonhartman1/ Linkedin.com/in/jasonhartmaninvestor/ Call our Investment Counselors at: 1-800-HARTMAN (US) or visit: https://www.jasonhartman.com/ Free Class:  Easily get up to $250,000 in funding for real estate, business or anything else: http://JasonHartman.com/Fund CYA Protect Your Assets, Save Taxes & Estate Planning: http://JasonHartman.com/Protect Get wholesale real estate deals for investment or build a great business – Free Course: https://www.jasonhartman.com/deals Special Offer from Ron LeGrand: https://JasonHartman.com/Ron Free Mini-Book on Pandemic Investing: https://www.PandemicInvesting.com

MIND your hormones
549. [UPDATE] Potty Training Part 2: the regression I wasn't anticipating & what we're doing about it

MIND your hormones

Play Episode Listen Later Dec 12, 2025 15:30


In today's episode, we're diving into potty training: part two because we were thrown a curveball, and I knew I had to talk about it. I never want you to feel like you're alone in this stage of parenting, and if you've experienced a potty-training regression, let this be your reminder: you're not doing anything wrong. Regressions are common, totally normal, and you are definitely not alone!
Ways to work with Corinne: Join the Mind Your Hormones Method, HERE! (Use code PODCAST for 10% off!!)
Mentioned in this episode: Check out GutPersonal products here & their testing packages HERE! Code CORINNE saves you 10% on supplements (& on testing!) or take the GutPersonal Quiz to find out exactly which supplements are best for your unique situation! My amazon list with everything we used!
FREE TRAINING! How to build a hormone-healthy, blood-sugar-balancing meal! (this is pulled directly from the 1st module of the Mind Your Hormones Method!) Access this free training, HERE!
Join the Mind Your Hormones Community to connect more with me & other members of this community!
Come hang out with me on Instagram: @corinneangealica
Or on TikTok: @corinneangelica
Email Fam: Click here to get weekly emails from me
Mind Your Hormones Instagram: @mindyourhormones.podcast
Disclaimer: always consult your doctor before taking any supplementation. This podcast is intended for educational purposes only, not to diagnose or treat any conditions.

The Kevin Sheehan Show
Is Jayden Daniels' confidence shook due to the Commanders' regression?

The Kevin Sheehan Show

Play Episode Listen Later Dec 9, 2025 19:30


12.9.25 Hour 2, Kevin Sheehan and callers debate whether Jayden Daniels should have gone back into the game vs Vikings and if his confidence is shaken after the down season for the Commanders.