In March of 2015, Denise Huskins was kidnapped, drugged, sexually assaulted, and held for 48 hours. When she was released, police called it a hoax and demanded that she apologize for wasting resources. The media dubbed it the "Gone Girl" case and death threats started flooding in. Except it wasn't a hoax at all. It was a Harvard-educated serial rapist named Matthew Muller who'd been terrorizing California for years. In this episode, we'll go through the kidnapping, the police misconduct that revictimized the survivors, Detective Misty Carausu's brilliant investigative work that finally caught Muller, and how Denise and Aaron turned trauma into national advocacy. From victims to suspects to survivors... their story changed how law enforcement handles sexual assault cases across America.

For Survivors of Sexual Violence:
- RAINN National Sexual Assault Hotline: 1-800-656-HOPE (4673)
- RAINN Online Chat: https://hotline.rainn.org/online
- Crisis Text Line: Text HOME to 741741
- National Sexual Violence Resource Center: https://www.nsvrc.org/

For Victims of Police Misconduct:
- ACLU: https://www.aclu.org/
- National Police Accountability Project: https://www.nlg-npap.org/
- Innocence Project: https://innocenceproject.org/

Mental Health Support:
- National Suicide Prevention Lifeline: 988
- SAMHSA National Helpline: 1-800-662-4357
- Psychology Today Therapist Finder: https://www.psychologytoday.com/us/therapists

Sources:
San Francisco Chronicle (Henry K. Lee's Reporting):
- https://www.sfchronicle.com/ (Search "Denise Huskins" for extensive archive)
Major National News Outlets:
- https://abcnews.go.com/
- https://www.nbcnews.com/
- https://www.cnn.com/
- https://www.nytimes.com/
- https://www.latimes.com/
- https://www.usatoday.com/
Bay Area Local News:
- https://www.ktvu.com/
- https://www.kron4.com/
- https://www.mercurynews.com/
- https://www.sfgate.com/
- https://www.timesheraldonline.com/
People Magazine & Entertainment:
- https://people.com/ (Search "Denise Huskins" for features)
American Nightmare (2024):
- https://www.netflix.com/title/81456520
"Victim F: From Crime Victims to Suspects to Survivors" (2021):
- https://www.amazon.com/Victim-Crime-Victims-Suspects-Survivors/dp/1538720558
Federal Court Case:
- https://www.justice.gov/usao-edca
- Case: USA v. Matthew Daniel Muller, Case No. 2:15-cr-00242-TLN
- https://www.pacer.gov/
State Court Cases:
- https://www.solano.courts.ca.gov/
- https://www.santaclaracourt.org/
- https://www.cc-courts.org/
Defamation Lawsuit:
- Huskins v. City of Vallejo - Settled March 2018 for $2.5 million
Denise Huskins' Attorneys:
- Doug Rappaport
- https://www.rappaportlaw.com/
Aaron Quinn's Attorneys:
- Daniel Russo
- https://russoandrusso.com/
Law Enforcement Training:
- The case is now taught at police academies nationwide
- Featured in FBI training materials on sexual assault investigations
- https://www.fbi.gov/services/training-academy
Criminal History & Background:
- https://www.bop.gov/inmateloc/ (Federal Bureau of Prisons Inmate Locator)
- Search: Matthew Daniel Muller, Register Number: 04664-111
California State Bar:
- https://www.calbar.ca.gov/
- Search for Matthew Muller's disciplinary records and disbarment
YouTube:
- https://www.youtube.com/@ABCNews
- https://www.youtube.com/@DatelineNBC
- https://www.youtube.com/@netflix
2015 News Archives:
- https://www.newspapers.com/
- https://news.google.com/newspapers
Articles Analyzing the Case:
- https://www.vulture.com/ (Vulture - entertainment analysis)
- https://www.rollingstone.com/ (Rolling Stone features)
- https://www.vanityfair.com/ (Vanity Fair long-form)
"Gone Girl" Film (2014):
- https://www.imdb.com/title/tt2267998/
Denise & Aaron's Advocacy Work:
- They've trained law enforcement agencies nationwide
- Spoken at conferences on sexual assault investigation best practices
- Worked with prosecutors on Muller's cold cases
California Prosecutors' Recognition:
- 2025: Named "Witnesses of the Year" by California prosecutors
- https://www.cdaa.org/
California District Attorneys Association:
- https://www.cdaa.org/ (2025 Witnesses of the Year announcement)
Snopes:
- https://www.snopes.com/ (Search "Denise Huskins" for fact-checking)
FBI Press Releases:
- https://www.fbi.gov/news/press-releases (Search "Matthew Muller")
U.S. Attorney's Office:
- https://www.justice.gov/usao-edca/pr (Press releases on Muller's prosecution)
Vallejo Police 2021 Apology:
- Issued by Chief Shawny Williams on August 25, 2021
- Archived in news articles and official city records
$2.5 Million Settlement (March 2018):
- City of Vallejo settled defamation lawsuit
- No admission of wrongdoing required by settlement terms
- Covered extensively in news media
Denise & Aaron's Media Appearances:
- ABC News 20/20
- Dateline NBC
- Various podcast interviews
- Law enforcement training events
- Public policy panels

Become a supporter of this podcast: https://www.spreaker.com/podcast/reverie-true-crime--4442888/support

Keep In Touch:
Twitter: https://www.twitter.com/reveriecrimepod
Instagram: https://www.instagram.com/reverietruecrime
Tumblr: https://reverietruecrimepodcast.tumblr.com
Facebook: https://www.facebook.com/reverietruecrime
Contact: ReverieTrueCrime@gmail.com
Intro & Outro by Jahred Gomes: https://www.instagram.com/jahredgomes_official
John Canzano talks about Oregon State's athletic department, AD Scott Barnes, and the search for creativity and leadership. Subscribe to this podcast. Read JohnCanzano.com
Description: Jen revisits this fan favorite episode with Mel Robbins. Buckle up, listeners. It was only a matter of time before our paths crossed with Mel Robbins, one of the most respected experts on change and motivation in the zeitgeist, and today is that day. Known for being the host of the #1 ranking education podcast in the world, bringing deeply relatable topics, tactical advice, tools, and compelling conversations to her audiences, Jen and Amy spend today's hour diving into Mel's “Let Them” theory, which is taking the world by storm, already delivering instant peace and freedom in the lives and relationships of people putting it into practice. Together, they discuss: The difference between “Let Them” and “Let Me” Learning to release the white-knuckle grip we hold over other people's behavior (and other things beyond our control) Reframing disappointment to view it as a gift (yes, it's possible!) Repositioning self-worth inward, rather than leaving it dependent on others' opinions. Thought-provoking Quotes: “For a lot of women, we spend so much time upstairs in our heads as people-pleasers and over-analyzers, over-thinking and ruminating, trying to get things perfect. That's the last place I should be, personally. I need to drop into my body and get out of my head.” – Mel Robbins “People reveal who they are and what they care about through their behavior. Ignore their words. Watch their behavior. Let people be who they are. Let them do what they're going to do. Focusing on them is not where your power is.” – Mel Robbins “The difference between ‘not my business' and ‘let them' is worlds apart. When you say, ‘not my business', you're scolding yourself. With, ‘let them', you're in the power position because you see what's happening and are choosing to allow it without allowing it. You're rising above it.” – Mel Robbins Resources Mentioned in This Episode: Demotivators - https://despair.com/collections/ Effin Birds on Instagram - https://www.instagram.com/effinbirds/ Van Morrison - https://www.vanmorrison.com/ No Hard Feelings by the Avett Brothers - https://open.spotify.com/track/0bgQ1hQrpP6ScdBZlDfLE2 Foo Fighters - https://foofighters.com/ DePeche Mode - https://www.depechemode.com/ The Cure - https://www.thecure.com/ Taylor Swift - https://www.taylorswift.com/ The 5 Second Rule: Transform Your Life, Work, and Confidence with Everyday Courage by Mel Robbins - https://amzn.to/427OHwu The Let Them Theory: A Life-Changing Tool That Millions of People Can't Stop Talking About by Mel Robbins - https://amzn.to/4hc53bE The Mel Robbins Podcast - https://www.melrobbins.com/podcast The Four Questions: For Henny Penny and Anybody with Stressful Thoughts by Byron Katie - https://amzn.to/3C7tKXT My Legacy Podcast - https://www.iheart.com/podcast/1119-my-legacy-podcast-255793246/ Man's Search for Meaning by Viktor Frankl - https://amzn.to/4ajbyaz Dr. 
Stuart Ablon - https://www.stuartablon.com/ The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Living a Good Life by Mark Manson - https://amzn.to/3PCqxmi Guest's Links: Website - https://www.melrobbins.com/ Instagram - https://www.instagram.com/melrobbins/ Facebook - https://www.facebook.com/melrobbins Twitter - https://x.com/melrobbins YouTube - https://www.youtube.com/melrobbins TikTok - https://www.tiktok.com/@melrobbins Podcast - https://www.melrobbins.com/podcast/ Connect with Jen! Jen's Website - https://jenhatmaker.com/ Jen's Instagram - https://instagram.com/jenhatmaker Jen's Twitter - https://twitter.com/jenHatmaker/ Jen's Facebook - https://facebook.com/jenhatmaker Jen's YouTube - https://www.youtube.com/user/JenHatmaker The For the Love Podcast is presented by Audacy. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
"The Five" on Fox News Channel airs weekdays at 5p.m. ET. Five of your favorite Fox News personalities discuss current issues in a roundtable discussion. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In August 2004, 21-year-old Katara Deboise Johnson finished her shift as an assistant manager at Taco Bell in Taylor, Texas, and drove home to her mobile home on North Dolan Street. By the following evening, her grandmother would discover her shot to death inside her bedroom.
Her car was missing. Her cell phone was gone. Hours after her death, someone answered her phone and claimed to be Katara before laughter echoed in the background and the call disconnected.
Her maroon Mitsubishi Lancer was later found abandoned at the Thorndale Community Pool in neighboring Milam County, miles from her home. No weapon was recovered. No signs of forced entry were reported. More than 50 people were interviewed. Polygraphs were administered. The Texas Rangers and Department of Public Safety assisted. Still, no arrests have been made.
In the months that followed, frustration grew. Family members publicly questioned whether enough resources were being devoted to the case. The NAACP launched its own inquiry. Katara's sister Kenyatta revealed she had been questioned as a possible suspect, something she strongly denied. Police have never publicly named a suspect.
Years passed. Her mother died in 2012 without answers. In 2019, the Williamson County Sheriff's Office took over the investigation. Authorities now believe more than one person may know what happened that night, particularly how Katara's car ended up in Thorndale.
If you have any information about the murder of Katara Johnson, please contact Texas Crime Stoppers at (800) 346-3243.
You can support Gone Cold and listen to the show ad-free at https://patreon.com/gonecoldpodcast
Find us at https://www.gonecold.com
For Gone Cold merch, visit https://gonecold.dashery.com
Follow Gone Cold on Facebook, Instagram, Threads, TikTok, YouTube, and X. Search @gonecoldpodcast on all of them, or just click https://linknbio.com/gonecoldpodcast
#JusticeForKataraJohnson #Taylor #WilliamsonCounty #WilCo #TX #Texas #TrueCrime #TexasTrueCrime #ColdCase #TrueCrimePodcast #Podcast #ColdCase #Unsolved #MissingPerson #Missing #Murder #UnsolvedMurder #UnsolvedMysteries #Homicide #CrimeStories #PodcastRecommendations #CrimeJunkie #MysteryPodcast
Become a supporter of this podcast: https://www.spreaker.com/podcast/gone-cold-texas-true-crime--3203003/support.
A federal judge shut down Defense Secretary Pete Hegseth amidst his attempts to retaliate against Democratic Senator Mark Kelly, one of six lawmakers who made the video reminding service members that they are not obligated to carry out illegal orders. Plus, the artificial intelligence CEO who has the world doing a double take. Learn more about your ad choices. Visit podcastchoices.com/adchoices
FBI agents and other law enforcement agencies are searching the desert vegetation near Nancy Guthrie's Tucson-area home. #CourtTV - What do YOU think?
Binge all episodes of #ClosingArguments here: https://www.courttv.com/trials/closing-arguments-with-vinnie-politan/
Watch 24/7 Court TV LIVE Stream Today [https://www.courttv.com/]
Join the Investigation Newsletter [https://www.courttv.com/email/]
Court TV Podcast [https://www.courttv.com/podcast/]
Join the Court TV Community to get access to perks: [https://www.youtube.com/channel/UCo5E9pEhK_9kWG7-5HHcyRg/join]
FOLLOW THE CASE:
Facebook [https://www.facebook.com/courttv]
Twitter/X [https://twitter.com/CourtTV]
Instagram [https://www.instagram.com/courttvnetwork/]
TikTok [https://www.tiktok.com/@courttvlive]
YouTube [https://www.youtube.com/c/COURTTV]
WATCH +140 FREE TRIALS IN THE COURT TV ARCHIVE [https://www.courttv.com/trials/]
HOW TO FIND COURT TV [https://www.courttv.com/where-to-watch/]
This episode of Closing Arguments Podcast was hosted by Vinnie Politan, produced by Kerry O'Connor and Robynn Love, and edited by Autumn Sewell. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The FBI boosts the reward in the Nancy Guthrie search to $100,000. A judge blocks the Pentagon's bid to punish Arizona Senator Mark Kelly. Trump revokes the EPA's ability to regulate climate pollution. Democrats and the White House are still far apart on a DHS funding deal. Plus, a Valentine's Day haunted house is offering weddings. Learn more about your ad choices. Visit podcastchoices.com/adchoices
HOUR 2: New details emerge as the search for Nancy Guthrie continues, 13 days in. You wanted it... Now here it is! Listen to each hour of the Dana & Parks Show whenever and wherever you want! © 2025 Audacy, Inc.
Send us a text via Fan Mail!
Building a secure and trusting relationship with our teenagers is at the heart of this episode. I discuss practical information, the credits most post-secondary institutions are looking for, and supporting our teenagers in their decision making along the way.
1:45 - Laying the foundation
3:26 - It's okay if it's a confusing time
5:19 - Allowing our teens to make decisions
7:52 - Relationship is vital
12:50 - A typical course of study
18:54 - Unschooling and conventional transcripts
As an Amazon Associate I earn from qualifying purchases.
Hold On to Your Kids by Gordon Neufeld
Revolution of Mercy by Bonnie Landry
Unschooling to University by Judy Arnall
Getting along: the foundation of successful homeschooling (podcast)
Maintain intimacy, maintain influence (blog post)
Homeschooling high school (blog post + additional resources)
Contact
On Instagram at @make.joy.normal
By email at makejoynormal@gmail.com
Search podcast episodes by topic
www.bonnielandry.ca
Shop my recommended resources
Thanks for listening to Make Joy Normal Podcast!
Happy Valentine's Day Weekend! Need to Outperform Your Competitors in 2026? Favour Obasi-ike, MBA, MS delivers an insightful masterclass on outperforming your competition through applied and actionable SEO marketing tactics. The discussion covers the critical distinction between direct and indirect competitors, strategic approaches to competitive analysis using tools like SimilarWeb.com and SparkToro.com, and the importance of focusing on long-term performance over short-term rankings.
Favour emphasizes the value of understanding customer intent, the difference between pre-purchase and post-purchase behavior, and how to leverage both Google search and social media platforms like Instagram for comprehensive market visibility. The session includes live Q&A with participants discussing real-world challenges in SEO strategy, website validation, and go-to-market approaches for startups in niche markets.

Book SEO Services | Quick Links for Social Business
>> Book SEO Services with Favour Obasi-ike
>> Visit Work and PLAY Entertainment website to learn about our digital marketing services
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
>> Purchase Flaev Beatz Beats Online
>> Favour Obasi-ike Quick Links

Detailed Timestamps
Introduction & Topic Overview
00:00 - 02:02 - Opening: Outperform competitors with applied search everywhere optimization (SEO marketing tactics)
02:02 - 03:10 - Understanding your competitors: National, international, local, and regional competition
Direct vs. Indirect Competitors
03:10 - 04:46 - Defining direct and indirect competitors in your market
04:46 - 06:17 - Market share dynamics and competitive positioning
Practical Example: Flower Business Case Study
06:17 - 09:13 - Using a Valentine's flower business as a practical example
09:13 - 11:47 - Time-based pricing strategies and customer behavior patterns
11:47 - 14:22 - Applying competitive insights to pricing and positioning
SEO Strategy & Competitive Analysis
14:22 - 17:35 - Understanding competitor strengths and weaknesses
17:35 - 20:48 - Using competitive intelligence for content strategy
20:48 - 23:19 - Keyword research and search intent analysis
Tools & Resources for Competitive Research
23:19 - 25:42 - Introduction to SimilarWeb, SocialBlade, and SparkToro
25:42 - 27:58 - Cost-effective alternatives for competitive analysis
27:58 - 30:16 - Building long-term visibility through strategic tools
Live Q&A Session Begins
30:16 - 31:02 - Mohsen introduces himself: Software engineer starting a startup in the tattoo field
31:02 - 32:34 - Question: How to approach SEO when there's no competition in your field?
Google vs. Instagram Strategy Discussion
32:34 - 35:05 - Why Google is the most unsaturated platform for search-based marketing
35:05 - 37:15 - Instagram as a feed-based platform vs. Google as intent-based search
37:15 - 40:30 - Pre-purchase vs. post-purchase intent: Amazon vs. YouTube analogy
Website Validation & Trust Building
40:30 - 43:12 - The importance of having a website for business credibility
43:12 - 45:38 - Off-page SEO: Connecting Instagram to your website
45:38 - 48:05 - Building relationship models across platforms
Advanced SEO Tactics
48:05 - 50:21 - Running ads effectively: Brand awareness before advertising spend
50:21 - 52:47 - Understanding audience targeting and customer journey mapping
52:47 - 54:26 - Closing remarks and how to stay connected on Clubhouse

Frequently Asked Questions (FAQs)
1. What is the difference between direct and indirect competitors?
Direct competitors are businesses that offer the same products or services within your niche or market. They target the same customer base and operate in similar ways. For example, if you sell red roses, other florists selling red roses are your direct competitors.
Indirect competitors are businesses that offer different products or services but satisfy the same customer need or compete for the same market share. Using the flower example, supermarkets and farmer's markets selling flowers would be indirect competitors to a specialized florist.

2. How do I find out who my competitors are?
Favour recommends using several competitive analysis tools:
SimilarWeb: For website traffic and audience insights
SocialBlade: For social media analytics and competitor tracking
SparkToro: For audience intelligence and content discovery
You can also identify competitors by searching for your target keywords on Google and seeing which businesses rank for those terms. Consider national, international, local, and regional competitors depending on your market scope.

3. Should I focus on Google or Instagram for my business?
According to Favour, Google is the most unsaturated platform because it's based on search intent—people actively looking for specific solutions. Instagram is a feed-based platform better suited for brand awareness and showcasing visual results (before/after transformations, product demonstrations).
Best approach: Use both strategically. Google captures pre-purchase intent (people researching solutions), while Instagram provides post-purchase validation and builds brand awareness. Having a website connected to your Instagram profile adds credibility and improves your off-page SEO.

4. What's more important: ranking or performance?
Favour emphasizes that performance is more important than ranking. Rankings fluctuate constantly (like stock prices or gas prices), but performance focuses on long-term outcomes:
How quickly can you serve customers?
What value do you provide beyond just appearing in search results?
Can customers find your information when they need it?
Anyone can rank with AI-generated content today; what makes your business different is the experience, speed, and value you deliver to customers.

5. How do I approach SEO if I have no competition in my field?
When you're in a niche market with little to no competition, Favour suggests:
Reverse engineer your success: If you're getting traction on Instagram, create corresponding website content (10 Instagram posts = 10 website articles)
Focus on search volume: Research whether there's search demand on Google for your services
Build credibility: Having a website validates your business more than social media alone
Create content ecosystems: Connect your social media to your website through embedded posts and cross-linking

6. Why is having a website important if I already have Instagram?
A website provides business validation and credibility. As Favour's example illustrated: if three businesses offer the same service but only one has a website, customers will trust the one with a website because it demonstrates investment in human resources, infrastructure, and long-term commitment.
Additionally, a website enables off-page SEO—when your Instagram links to your website, you're building relationship models between platforms that improve your overall search visibility.

7. What is pre-purchase vs. post-purchase intent?
Pre-purchase intent: Customers researching before buying (e.g., reading Amazon reviews, comparing products on Google)
Post-purchase intent: Customers who already bought and need guidance (e.g., watching YouTube tutorials on how to use an air fryer they purchased)
Understanding this distinction helps you create appropriate content for each stage of the customer journey. Google and review sites capture pre-purchase intent, while platforms like YouTube and Instagram serve post-purchase needs.

8. Should I run ads if people can't find my business organically?
Favour advises: Don't run ads first if people can't find you organically. If the answer to "Will they find my business without ads?" is no, then focus on building organic visibility first through SEO and content creation.
If people can already find you organically, then running ads becomes more cost-effective because you're amplifying existing brand awareness rather than starting from zero.

9. What are applied SEO marketing tactics?
Applied SEO refers to search everywhere optimization—not just optimizing for Google, but creating a comprehensive presence across all platforms where customers might search:
Google search
Instagram search
YouTube search
Social media platforms
Review sites
Local directories
It's about understanding customer behavior across multiple touchpoints and ensuring your business is discoverable wherever customers are looking.

Additional Resources Mentioned
SimilarWeb: Competitive website analytics
SocialBlade: Social media statistics and tracking
SparkToro: Audience research and insights
ChatGPT: AI content generation tool (mentioned in context of ranking vs. performance)
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
G2's data reveals enterprise buyers increasingly rely on AI-powered search for software decisions. Tim Sanders, Chief Innovation Officer at G2, oversees insights from over 100 million annual software buyers and has identified critical optimization strategies for AI discovery. Sanders shares how markdown-formatted key takeaways at page tops dramatically improve AI crawling and training inclusion, plus why transparent pricing pages reduce model confusion and improve expected value calculations for enterprise buyers.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Here's a taster of our new Premium-only story. To hear it in full, please join our Premium Subscription service.
Become a PREMIUM Subscriber
You can now enjoy Animal Tales by becoming a Premium Subscriber. This gets you:
All episodes in our catalogue advert free
Bonus Premium-only episodes (every Friday) which will never be used on the main podcast
We guarantee to use one of your animal suggestions in a story
You can sign up through Apple Podcasts or through Supercast and there are both monthly and yearly plans available. You can find more Animal Tales at https://www.spreaker.com/show/animal-tales-the-kids-story-podcast
A Note About The Adverts
In order to allow us to make these stories we offer a premium subscription and run adverts. The adverts are not chosen by us, but played automatically depending on the platform you listen through (Apple Podcasts, Spotify, etc) and the country you live in. The adverts may even be different if you listen to the story twice.
We have had a handful of instances where an advert has played that is not suitable for a family audience, despite the podcast clearly being labelled for children. If you're concerned about an advert you hear, please contact the platform you are listening to directly. Spotify, in particular, has proven problematic in the past, for both inappropriate adverts and the volume at which the adverts play. If you find this happening, please let Spotify know via their Facebook customer care page. As creators, we want your child's experience to be a pleasurable one. Running adverts is necessary to allow us to operate, but please do consider the premium subscription service as an alternative – it's advert free.
The latest on the search for Nancy Guthrie, the 84-year-old mother of Today Show co-anchor Savannah Guthrie. The FBI releases video of a masked man on Nancy's doorstep the night she went missing. In Georgia, a man is on trial for the 2001 murder of a law student. His defense attorney has tough questions for the victim's boyfriend. In Dateline Round Up, a courtroom outburst from Luigi Mangione, and Alex Murdaugh appeals his case. Plus, a lookback at the attack on Olympic figure skater Nancy Kerrigan.
Nancy Guthrie Tipline: 1-800-CALL-FBI (1-800-225-5324)
Nancy Guthrie images: https://www.fbi.gov/wanted/kidnap/nancy-guthrie
Find out more about the cases covered each week here: www.datelinetruecrimeweekly.com
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Bob is joined by retired NYPD Sergeant Joe Giacalone to discuss the details of the disappearance of Nancy Guthrie. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Arizona law enforcement officials say they've received thousands of calls regarding the disappearance of Nancy Guthrie. The calls came after the FBI released doorbell camera footage from Guthrie's front door that was taken the morning she disappeared. Learn more about your ad choices. Visit podcastchoices.com/adchoices
D&P Highlight: More items found in search of Nancy Guthrie. Whether those items are connected is unknown. full 699 Thu, 12 Feb 2026 19:58:00 +0000 eC0dlCSOPIVTnxWqxOGRTe372NNy2YAe news The Dana & Parks Podcast news D&P Highlight: More items found in search of Nancy Guthrie. Whether those items are connected is unknown. You wanted it... Now here it is! Listen to each hour of the Dana & Parks Show whenever and wherever you want! © 2025 Audacy, Inc. News False
The latest overnight updates on the investigation as the search for Nancy Guthrie enters its 12th day. Also, Attorney General Pam Bondi clashes with Democrats during House Judiciary Committee testimony on the Trump administration's handling of the Epstein case files. Plus, the biggest results and moments from the Olympics, with some of Team USA's top stars in action. And, remembering the life and legacy of James Van Der Beek. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
On this episode of the Orange and Brown Talk podcast, Mary Kay Cabot, Ashley Bastock, and Dan Labbe dive into the biggest question facing the Browns this offseason: Who will be their quarterback? The conversation kicks off with a deep dive into the most intriguing external option, Malik Willis. The crew discusses if his dual-threat skill set is the perfect match for new head coach Todd Monken's system, but they also weigh the significant financial commitment and risk involved with his limited playing time. With other teams like the Miami Dolphins reportedly interested, will the Browns even have a shot? Then they pivot to the internal options, debating whether the team is better off exploring what they have in Shedeur Sanders or the recovering Deshaun Watson, who has recently started throwing again. Could a new offensive system finally unlock the elite player Watson once was? They also touch on veteran free agents like Kirk Cousins and Derek Carr and analyze the front office's philosophy of turning over every stone to find the right answer at the most important position in football. Follow us: On X: https://x.com/orangebrowntalk YouTube: https://www.youtube.com/@ClevelandBrownsonclevelandcom Instagram: https://www.instagram.com/orangeandbrowntalk/ Music credits: Ice Flow by Kevin MacLeod Link: https://incompetech.filmmusic.io/song/3898-ice-flow License: https://filmmusic.io/standard-license Learn more about your ad choices. Visit megaphone.fm/adchoices
Join Dave and Tom as they engage in an in-depth, verse-by-verse examination of the Gospel of John. We hope you will be challenged and convicted as you listen to these insightful, exegetical discussions compiled from nearly four years of Search the Scriptures Daily radio programs. Open your Bible and get ready for an edifying pilgrimage into God's Word.
Send a text
Awareness about barefoot running reached a crescendo with the publication of the book Born to Run, about the Tarahumara Indians in Mexico who came to run the Leadville Trail 100 in 1992 and 1994. Steven Sashen and his wife Lena started a company that produced kits that allowed buyers to assemble their own huarache-style sandals like the ones the Tarahumaras wore. This evolved into them designing shoes for running, court sports, and other training that has become the popular brand Xero Shoes, which spurned a Shark Tank offer and had sales of $48 million in 2022. Their shoes are designed with a wider toe box and a zero drop, meaning the heel is not lifted at all, allowing the foot to function more as nature intended us to move, thus strengthening the foot and reducing injuries. Steven, a one-time standup comic, entertainingly explains how the built-up shoes that are common in the footwear industry have actually created more problems for our bodies. In addition, those shoes' foam cushioning begins breaking down from the first use, necessitating their replacement within a short time. Xero shoes, on the other hand, have a 5000-mile sole warranty using FeelTrue rubber they have developed themselves. You'll learn a lot here about the human science that goes into Xero shoes. Steven himself is a masters track sprinter, and shares many anecdotes about adult track competition, as well as many terrific helpful lessons about the business world as a whole.
Steven Sashen
xeroshoes.com
Facebook and Instagram @xeroshoes
Podcast: The Movement Movement
jointhemovementmovement.com
Bill Stahl
silly_billy@msn.com
Facebook: Bill Stahl
Instagram and Threads @stahlor and @we_are_superman_podcast
YouTube: We Are Superman Podcast
Subscribe to the We Are Superman Newsletter!
https://mailchi.mp/dab62cfc01f8/newsletter-signup
Subscribe to our Substack for my archive of articles of coaching tips developed from my more than three decades of experience, wild and funny stories from my long coaching career, the wit and wisdom of David, and highlights of some of the best WASP episodes from the past that I feel are worthwhile giving another listen.
Search either We Are Superman Podcast or @billstahl8
Register for the American Heroes Run: https://ultrasignup.com/register.aspx?did=133138
Ride to End ALZ Colorado
www.alz.org/rideco
Send a text
In this episode of the Cops and Writers Podcast bonus series, retired Milwaukee Police Sergeant Patrick O'Donnell reads Chapter 28, "Wow! That's Realistic!" from his upcoming book, Police Stories: The Rookie Years - True Crime, Chaos, & Life as a Big City Cop.
It's 2:00 AM at District Five. Patrick and his partner Rachel are processing an arrest when a bloodcurdling scream echoes from the front lobby: "Help! He's been shot!"
What they find is a dead body with a gunshot wound to the head, a grieving girlfriend, and—behind the victim—District Five's crime prevention display: a casket surrounded by yellow crime scene tape for Police Week.
Hours later, after the body is removed but the blood remains, the captain walks through and delivers the perfect line: "Damn, that is one realistic crime scene display!"
All stories are real. Names and locations have been changed where necessary.
White House border czar Tom Homan said on Feb. 12 that a significant drawdown of immigration enforcement agents in Minnesota is underway. He proposed that the surge there should end.
Homan touted de-escalation efforts and cooperation between state and local officials and federal immigration agents. Homan and a small number of federal agents will remain on the ground to transition "full command and control back to the field office."
Federal and local law enforcement officers are continuing the search for Nancy Guthrie, mother of "Today" show host Savannah Guthrie. Authorities say they believe she was taken against her will after last being seen at her home on Jan. 31.
FBI agents combed the desert near Guthrie's Tucson-area home on Feb. 11. They also knocked on doors and searched through bushes and boulders in the neighborhood.
Authorities say several hundred detectives and agents are now assigned to the case, which has captured national attention.
This podcast episode provides a comprehensive overview of technical SEO, emphasizing its critical role in any successful digital strategy for 2026. Favour Obasi-ike, MBA, MS delves into the core components of technical SEO, including Core Web Vitals, mobile optimization, and the detrimental impact of crawlability issues and broken links.
This episode also highlights the significant growth of the SEO services market, projected to reach nearly $150 billion by 2031. You will gain valuable insights into the importance of a technically sound website for improving user engagement, search engine rankings, and overall online visibility. Favour also shares information about relevant technical SEO courses and resources.
Purchase all your Free and Paid Technical SEO Courses available in 2026 here >>

Podcast Episode Timestamps
[00:00 - 00:10] Introduction: Technical SEO Courses and Stats for 2026
[02:57 - 03:45] What is Technical SEO and Why is it Important?
[03:45 - 04:27] The Importance of Website Speed and Performance
[04:27 - 05:21] Global SEO Services Market Size and Growth Projections
[05:51 - 07:19] Understanding Core Web Vitals and Their Impact on User Engagement
[07:19 - 08:42] The Significance of Mobile Optimization for SEO
[08:50 - 12:06] Crawlability, Broken Links, and Their Effect on Search Rankings

FAQs for Technical SEO
What is technical SEO?
Technical SEO refers to the process of optimizing the technical aspects of a website to improve its ranking in search engines. It focuses on making a website faster, easier to crawl for search engine bots, and more understandable for search engines. This includes optimizing website speed, mobile-friendliness, and site structure, and ensuring there are no broken links or crawl errors (a minimal link-check sketch follows these notes).
Why is technical SEO important for my website in 2026?
Technical SEO is crucial for your website's success in 2026 because it directly impacts your search engine rankings and user experience. With increasing competition online, having a technically sound website is no longer a niche specialization but a fundamental requirement. A well-optimized website will have better visibility on search engines, leading to more organic traffic, higher user engagement, and ultimately, more conversions.
What are Core Web Vitals?
Core Web Vitals are a set of specific factors that Google considers important in a webpage's overall user experience. They consist of three main metrics: Largest Contentful Paint (LCP), which measures loading performance; First Input Delay (FID), which measures interactivity; and Cumulative Layout Shift (CLS), which measures visual stability. Websites that meet Core Web Vitals standards can see a significant increase in user engagement. Read more about technical SEO in Google's documentation.
How does mobile optimization affect SEO?
With over 60% of global website traffic coming from mobile devices, mobile optimization is a critical factor for SEO. Search engines like Google prioritize mobile-friendly websites in their rankings. A website that is optimized for mobile will provide a better user experience for mobile users, leading to higher engagement, lower bounce rates, and a greater likelihood of ranking on the first page of search results.
Where can I find the best technical SEO courses?
There are numerous free and paid technical SEO courses available online. Some popular platforms for finding high-quality courses include Coursera, Udemy, and the Google Digital Garage. It's recommended to look for courses that are up to date with the latest SEO trends and best practices for 2026.

Book SEO Services | Quick Links for Social Business
>> Book SEO Services with Favour Obasi-ike
>> Visit Work and PLAY Entertainment website to learn about our digital marketing services
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
>> Purchase Flaev Beatz Beats Online
>> Favour Obasi-ike Quick Links
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
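To make the crawlability and broken-link point above concrete, here is a minimal sketch (editor's illustration, not material from the episode) of a Python link checker that walks a site's internal links and reports ones that fail to load. The start URL and page limit are placeholder assumptions; a real audit would also respect robots.txt and rate limits.

```python
# Minimal broken-link checker: crawls same-site links and flags failures.
# Illustrative only -- start URL and max_pages are placeholder assumptions.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin, urlparse
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def fetch(url):
    # Raises HTTPError for 4xx/5xx responses, URLError for network failures.
    req = Request(url, headers={"User-Agent": "link-check-sketch/0.1"})
    with urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def check_site(start_url, max_pages=50):
    domain = urlparse(start_url).netloc
    seen, queue, broken = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            body = fetch(url)
        except (HTTPError, URLError, TimeoutError) as err:
            broken.append((url, str(err)))  # broken or unreachable link
            continue
        parser = LinkExtractor()
        parser.feed(body)
        for href in parser.links:
            absolute = urljoin(url, href).split("#")[0]
            if urlparse(absolute).netloc == domain:  # stay on the same site
                queue.append(absolute)
    return broken

if __name__ == "__main__":
    for url, error in check_site("https://example.com/"):
        print(f"BROKEN: {url} -> {error}")
```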
Are you overly critical of yourself? Perhaps of others? And how do you respond to negative, critical people? In this episode, we discuss how to deal with a critical spirit – whether you are critical of yourself or others, or you have to deal with judgmental people. We discuss the misconceptions around Jesus' command "do not judge" and how a critical spirit can show up in our lives and relationships. After discussing the emotional and relational damage a critical spirit can cause in our lives, we share practical, biblical steps for breaking free from self-criticism and judgmental attitudes, and for handling negative people. Subscribe to the podcast and tune in each week as Haley and Dustin share with you what the Bible says about real-life issues with compassion, warmth, and wit. So you have every reason for hope, for every challenge in life. Because hope means everything. Hope Talks is a podcast of the ministry of Hope for the Heart. Listen in to learn more [00:08:06] What a “Critical Spirit” Really Is [00:15:00] Heart, Habit, or Hurt? Three Roots of Criticism [00:19:30] How a Critical Spirit Wrecks Relationships and Joy [00:27:59] From Self-Hatred to Grace: Seeing Yourself as God Does [00:40:30] Choosing Gratitude and Overlooking Offenses Hope for the Heart resources Book – Critical Spirit: https://www.hopefortheheart.org/store/product/critical-spirit Order our newest resource, The Care and Counsel Handbook, providing biblical guidance 100 real-life issues: https://resource.hopefortheheart.org/care-and-counsel-handbook Other Hope for the Heart Resources Facebook: https://www.facebook.com/hopefortheheart Instagram: https://www.instagram.com/hopefortheheart Want to talk with June Hunt on Hope in the Night about a difficult life issue? Schedule a time here: https://resource.hopefortheheart.org/talk-with-june-hope-in-the-night God's plan for you: https://www.hopefortheheart.org/gods-plan-for-you/ Give to the ministry of Hope for the Heart: https://raisedonors.com/hopefortheheart/givehope?sc=HTPDON ---------------------------- Bible verses mentioned in this episode Matthew 7:1–5 -- “Do not judge, or you too will be judged. For in the same way you judge others, you will be judged, and with the measure you use, it will be measured to you. “Why do you look at the speck of sawdust in your brother's eye and pay no attention to the plank in your own eye? How can you say to your brother, ‘Let me take the speck out of your eye,' when all the time there is a plank in your own eye? You hypocrite, first take the plank out of your own eye, and then you will see clearly to remove the speck from your brother's eye. Psalm 139:23-24 – Search me, God, and know my heart; test me and know my anxious thoughts. See if there is any offensive way in me, and lead me in the way everlasting. Proverbs 15:1 – “A gentle answer turns away wrath, but a harsh word stirs up anger. Proverbs 19:11 -- Good sense makes one slow to anger, and it is his glory to overlook an offense. 1 Corinthians 13:5 – “love keeps no record of wrongs.” Proverbs 13:18 - If you ignore criticism, you will end in poverty and disgrace; if you accept correction, you will be honored.
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
Enterprise buyers research software through AI 13 times more than traditional search. Tim Sanders, Chief Innovation Officer at G2, oversees buyer behavior insights from 100 million annual software purchasers and has identified critical optimization gaps in AI-driven discovery. The discussion covers markdown key takeaway optimization for AI crawling inclusion, pricing page transparency strategies that reduce model confusion, and expected value frameworks for balancing negotiation leverage against AI comprehension requirements.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The world famous Bottom of the Stream Movie Show is back for another episode! We take on killer invisible squid this time, as we watch Island Zero, a 2018 horror flick directed by Josh Gerritsen. Listen on to hear what we made of this one and discover if there is anything even fishier going on behind the scenes! Bottom of the stream is a weekly podcast, hosted by film lovers Adam and Nick, exploring the parts of Netflix that most people don't go to in a bid to find out what hidden gems are lurking down there Every week we rank the films we watch against each other and place them in what we like to call THE STREAM TABLE which can be found on our website www.bottomofthestream.com Follow us on TikTok, Instagram and Letterboxed at @bots_podcast Search for Bottom of the Stream on youtube to stay up to date with our Monday show where we discuss the latest goings on at Netflix and the world of Streaming Please consider supporting the show on Patreon, If you do we will give you lots of bonus content including early access to the episodes. Check it out over at www.patreon.com/bottomofthestream We also now have a discord so join us to hang out https://discord.gg/wJ3Bfqt
Search is decaying, attention is fragmented and AI is rewriting the rules faster than most teams can update their decks. If you're still planning content like it's 2019, you're already invisible.
In this episode, Kyle Denhoff, Sr Director of Marketing at HubSpot, pulls the curtain back on what happens after the inbound era. We get brutally honest about why channel-first thinking is dead, how HubSpot rebuilt itself as a media company inside a SaaS giant, and why "always-on" isn't a buzzword—it's survival. Kyle also breaks down how AI is actually being used behind the scenes (no, not to replace marketers), and why taste, editorial judgment, and distribution matter more than ever in a world flooded with machine-made content.
We also explore:
Why "more content" is the fastest way to lose relevance
How audience-first strategy replaces blogs, funnels, and campaign calendars
The real way HubSpot uses AI to drive conversion—without killing the brand
Why creators and practitioners now beat brands in buyer trust
The uncomfortable truth: nobody has the playbook, and pretending you do is the risk
This is a reality check for B2B teams still clinging to templates while the ground shifts under them.
FBI agents have discovered a black glove along a roadside close to the home of Nancy Guthrie, an important lead in the ongoing investigation into the masked assailant believed to have kidnapped the 84-year-old woman, reports indicate.See omnystudio.com/listener for privacy information.
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.
Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.
Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.
Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.
Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.
Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?
Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Jeffrey came up with the solution in 2014.Jeff Dean [00:03:28]: Don't forget, L'Oreal Vinyls as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. 
And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked: the original lineup was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference-time scaling can be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and the various AI Overviews.Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it in terms of capability: on certain tasks, the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of architectural improvements or some sort of model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there such an example, where a benchmark inspired an architectural improvement? Like, I'm just kind of jumping on that because you just mentioned it.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, as you say, the single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something, and most models don't actually have, you know, much larger than 128K or 256K these days or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The use cases we're trying to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful.
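As a concrete illustration of the multi-needle evaluation Jeff contrasts with the saturated single-needle benchmark, here is a toy harness that plants several facts at random depths in a long context and checks how many the model recovers. The `call_model` function, the needle facts, and the token estimate are all stand-ins, not an actual Gemini benchmark.

```python
# Toy multi-needle long-context check, in the spirit of the discussion above.
# `call_model` is a placeholder for whatever LLM API you use; the facts,
# filler documents, and scoring are invented for illustration.
import random

NEEDLES = {
    "What is the project codename?": "aurora-7",
    "Which city hosted the 2031 offsite?": "Reykjavik",
    "What is the invoice number for the June order?": "INV-88213",
}

def build_haystack(filler_docs, needles, target_tokens=500_000):
    """Scatter several needle sentences at random depths in a long context."""
    docs = list(filler_docs)
    for question, answer in needles.items():
        docs.insert(random.randrange(len(docs) + 1),
                    f"For the record, the answer to '{question}' is {answer}.")
    text = "\n\n".join(docs)
    return text[: target_tokens * 4]        # rough chars-per-token estimate

def score(call_model, haystack, needles):
    """Return the fraction of planted needles the model retrieves correctly."""
    hits = 0
    for question, answer in needles.items():
        reply = call_model(f"{haystack}\n\nQuestion: {question}\nAnswer briefly.")
        hits += int(answer.lower() in reply.lower())
    return hits / len(needles)
```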
And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles or robots, or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe it doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of hints to the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's also like a vision capable thing. Like, so, so maybe vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent times, because it's such a useful capability for sensing the world around you, which is really what we want these models to do: interpret the things we're seeing or the things we're paying attention to and then help us use that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get like now an 18-row table of that information extracted from the video, which is, you know, not something most people think of as, like, turning a video into a SQL-like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents that have the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are relevant.
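Jeff's funnel can be written down almost directly. The sketch below assumes three hypothetical scoring functions of increasing cost; the candidate counts (30,000 and 117) are just the numbers used in the conversation, and `strong_model` stands in for whatever capable model reads the survivors.

```python
# Sketch of the retrieval funnel described above: cheap scoring over a huge
# candidate set, a mid-weight reranker over ~30,000 docs, and the most capable
# model reading only the final ~117. The scorers are placeholders, not real APIs.
def retrieval_funnel(query, corpus, cheap_score, rerank_score, strong_model,
                     k1=30_000, k2=117):
    # Stage 1: very lightweight relevance signal over everything (highly parallelizable).
    stage1 = sorted(corpus, key=lambda doc: cheap_score(query, doc), reverse=True)[:k1]

    # Stage 2: a somewhat more sophisticated model narrows 30,000 -> ~117.
    stage2 = sorted(stage1, key=lambda doc: rerank_score(query, doc), reverse=True)[:k2]

    # Stage 3: the most capable model actually reads the surviving documents.
    context = "\n\n".join(stage2)
    return strong_model(f"Using only these documents:\n{context}\n\nTask: {query}")
```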
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in, like, Google search history that, well, you know, BERT was basically immediately put inside of Google search and that improved results a lot, right? Like I don't have any numbers off the top of my head, but like, I'm sure you guys do; those are obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, but really getting at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Yeah. Like it's Google, it's YouTube. YouTube has this like semantic ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me for YouTube's size.Jeff Dean [00:23:50]: And then most recently Grok also, for xAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the web search and data mining conference in 2009. We never actually published any papers about the origins of Google search, sort of, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things.
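A toy version of the 2001-era design Jeff describes, an index sharded across machines, held entirely in RAM, and queried with an expanded set of synonym terms, might look like the following. The synonym table and the scoring are invented for illustration; the real system was far more elaborate.

```python
# Toy sharded in-memory inverted index with query-term expansion.
from collections import defaultdict

SYNONYMS = {"restaurant": ["restaurants", "cafe", "bistro"]}   # illustrative only

class Shard:
    def __init__(self, docs):                      # docs: {doc_id: text}
        self.postings = defaultdict(set)           # term -> doc_ids, all in RAM
        for doc_id, text in docs.items():
            for term in text.lower().split():
                self.postings[term].add(doc_id)

    def search(self, terms):
        hits = defaultdict(int)
        for term in terms:                         # cheap once there is no disk seek
            for doc_id in self.postings.get(term, ()):
                hits[doc_id] += 1
        return hits

def query(shards, user_query):
    # A three or four word query can expand to dozens of terms once synonyms are added.
    terms = []
    for word in user_query.lower().split():
        terms.append(word)
        terms.extend(SYNONYMS.get(word, []))
    merged = defaultdict(int)
    for shard in shards:                           # in practice these run in parallel
        for doc_id, score in shard.search(terms).items():
            merged[doc_id] += score
    return sorted(merged, key=merged.get, reverse=True)[:10]
```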
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah.Jeff Dean [00:29:30]: There's a whole, like, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "Latency Numbers Every Programmer Should Know." Was there just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
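The batching argument is easy to reproduce as a back-of-the-envelope calculation. The two energy constants below are just the rough orders of magnitude quoted in the conversation (about one picojoule per multiply, about a thousand to move a weight across the chip), not measurements of any real TPU.

```python
# Back-of-the-envelope energy accounting for weight movement vs. compute.
PJ_PER_MAC = 1.0             # picojoules per multiply-accumulate (order of magnitude)
PJ_PER_WEIGHT_MOVE = 1000.0  # picojoules to move one parameter to the compute unit

def energy_per_useful_mac(batch_size):
    """Each moved weight is reused once per example in the batch."""
    return PJ_PER_MAC + PJ_PER_WEIGHT_MOVE / batch_size

for b in (1, 8, 64, 256):
    print(f"batch={b:4d}  ~{energy_per_useful_mac(b):7.1f} pJ per useful multiply")
# batch=1 pays ~1001 pJ per multiply; batch=256 amortizes it down to ~5 pJ,
# which is the energy-based reason accelerators want batching.
```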
Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with the TPUs, right? Like, to serve at your scale, you probably sort of saw that coming. Like what hardware innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. So you're now sort of striping your smallish scale model over say 16 or 64 chips. But if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model onto an ASIC, and that's kind of like the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the sort of ML research puck is going, in some sense, because as a hardware designer for ML in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime as a chip, to take you three, four or five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast changing field. And so having people with
Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. 
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Uh, effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part that you can score, or what are like, yeah, yeah. How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic as opposed to an actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there is that weird cliff where like, it feels like we've done the easy stuff, and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those and seeing which ones actually make a difference is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now, where you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out.
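A minimal sketch of the "same model prompted differently as a critic" idea Jeff mentions for non-verifiable tasks: generate several candidates, have the model grade them against a rubric, and use the grade as a reranking or reward-style signal. The `call_model` function, the rubric, and the scoring scale are assumptions for illustration, not a description of how Gemini is trained.

```python
# Sketch of model-as-critic scoring for a non-verifiable task.
import re

def generate_candidates(call_model, task, n=4):
    """One prompt style produces candidate answers."""
    return [call_model(f"Task: {task}\nGive your best answer.") for _ in range(n)]

def critic_score(call_model, task, answer):
    """A second prompt style asks the same model to act as a grader."""
    rubric = ("Rate the answer from 0 to 10 for factual grounding, coverage, "
              "and clarity. Reply with just the number.")
    reply = call_model(f"Task: {task}\nAnswer:\n{answer}\n\n{rubric}")
    match = re.search(r"\d+(\.\d+)?", reply)
    return float(match.group()) if match else 0.0

def best_answer(call_model, task):
    """Use the critic's grade to rerank; the same grade could feed an RL loop."""
    candidates = generate_candidates(call_model, task)
    return max(candidates, key=lambda a: critic_score(call_model, task, a))
```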
Um, uh, just to draw a bit on the IMO gold. Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things. And then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this question about the merger of symbolic systems and LLMs was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural net based models. So it never made sense to me to have, like, completely separate, discrete, symbolic things, and then a completely different way of thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think like that IMO effort, with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model; or I want to decode speech, so I have a speech recognition model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA, who was on that team. Uh, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that like people with this universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh.
There's this concept of, like, maybe capacity of a model: abstractly a model can only contain the number of bits that it has. And, uh, you know, God knows, like, Gemini Pro is like one to 10 trillion parameters. We don't know. But, uh, the Gemma models, for example, right? Like a lot of people want the open source local models that are like that, and, uh, they have some knowledge which is not necessary, right? Like they can't know everything. Like you have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of, like, how long bridges are, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use, being able to retrieve from my email as a tool, and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple, you know, stages of interaction. That makes sense.
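The "retrieve from my email as a tool" pattern Jeff sketches above can be illustrated with a small tool-use loop: the model never trains on personal data, it just issues search calls over it and reasons over the snippets across multiple stages. The tool names, prompt format, and `call_model` function are hypothetical stand-ins.

```python
# Sketch of multi-stage retrieval over personal state via tool calls.
def answer_over_personal_state(call_model, tools, question, max_steps=4):
    """tools: dict like {"search_email": fn(query) -> [snippets], "search_photos": ...}"""
    notes = []
    for _ in range(max_steps):
        plan = call_model(
            f"Question: {question}\nNotes so far: {notes}\n"
            f"Available tools: {list(tools)}.\n"
            "Reply either 'CALL <tool>: <query>' or 'ANSWER: <final answer>'."
        )
        if plan.startswith("ANSWER:"):
            return plan[len("ANSWER:"):].strip()
        if plan.startswith("CALL"):
            tool_name, _, query = plan[len("CALL"):].partition(":")
            # Run the requested search over the user's own data and keep the results
            # as notes for the next reasoning step.
            results = tools.get(tool_name.strip(), lambda q: [])(query.strip())
            notes.append({"tool": tool_name.strip(), "query": query.strip(),
                          "results": results[:5]})
    return call_model(f"Question: {question}\nNotes: {notes}\nGive the best answer.")
```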
Alessio Fanelli [00:52:24]: Do you think the vertical models are, like, an interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for, say, robotics. We're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming; you know, it'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that it may suffer on, or multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download it as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? If I need a trillion healthcare tokens, it's like, they're probably not out there, you know. I think that's really like the question.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data that is not public healthcare data. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your favorite examples was you can put a low resource language in the context and it just learns.
Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. You can put your whole data set in the context, right.Jeff Dean [00:56:27]: If you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, uh, or Ethiopian Amharic or something, and, you know, we probably are not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.Jeff Dean [00:56:49]:
In between all the descriptions of trees and Arthur lustily wailing, there's some really interesting stuff -- song callbacks, Merlin's "gap year", Mary Stewart's version of known characters ... and then some real weird descriptions of women.Next episode: The Hollow Hills, Book 2: The Search, Sections 1 - 5Get more of Brett Parnell's music at bearinabarnnyc.comMore from Heeral Chhibber at heeral.orgGet merch: tar-valon-or-bust.printify.me/products and northingtron.redbubble.com Hosted on Acast. See acast.com/privacy for more information.
Search for Savannah Guthrie's mother continues with 18,000-plus calls pouring in; GA loses 209,000 ACA enrollees amid Medicaid debate; New Seasons workers win landmark union contract in OR; Experts: KY pork plant settlement wouldn't protect environmental health.
Join us, diehard cinephiles, for the third in our new series about Car Movies, wherein we wax poetic about the uncut gem Days of Thunder, starring Tom Cruise and Robert Duvall. Alex, Randy, Beck, and Tyler discuss The Randy and Alex Film Festival; the movie's (completely stacked) intro credits; behind-the-scenes drama; Cary Elwes as Iceman; Alex's deep disappointment at the film's 37% Rotten Tomatoes score; some of the film's many profoundly unlikely scenarios; drinkin' weird stuff and Bud heavies in a moving car hauler; and a surprising number of personal health jokes, but not to do with the scenes you're thinking of (which have perhaps not aged as well as some other parts of the movie);Also covered: the gritty and impressive race cinematography; the real BAT.com; the best '90s movie parking lot; the formative way in which several of our crew learned about drafting; and, of course, the Cruise Run. Tyler reveals himself as easy to please, while Beck's godfather does not. Finally, we get a late (and exceedingly rare) name drop from our Mr. Nonnenberg. Mentioned in this episode:12:36 Ex–Tom Cruise 1984 Nissan 300ZX Race Car33:48 RoW 1985 Porsche 928S 5-Speed Strosek37:19 Search results for Monte Carlo SS Aero CoupeGot suggestions for our next guest from the BaT community, One Year Garage episode, or (B)aT the Movies subject? Let us know in the comments below!
If you've ever started strong… felt motivated… made the plan… and then “fallen off” — this episode is for you.Because what you've been calling self-sabotage might not be sabotage at all.In this episode of Goddess Got Goals, Lisa explores why falling off the plan is rarely a willpower problem and almost never a character flaw. Instead, it's often a nervous system response to pressure, rigidity, and plans that don't account for energy, hormones, ADHD wiring, or emotional load.You'll learn:• Why most “self-sabotage” is actually self-protection• What's happening biologically when motivation suddenly drops• How perimenopause lowers stress tolerance and changes the rules• Why ADHD brains struggle with rigid, streak-based plans• The hidden reason “just get back on track” makes it worse• How to replace rigid plans with anchors, rhythm, and self-trustThis episode is an invitation to stop shaming yourself for falling off and start building something you can always return to.Because the goal isn't never falling off.The goal is creating a way forward that feels safe enough to sustain.✨ Links mentioned in this episode:• The Quickening — a nervous system reset & re-entry point• Warrior Goddess Archetype Quiz• UnarmouredEnter the Warrior Goddess Saga for your next step:https://thewarriorgoddesssaga.comBe sure to connect with Lisa Barwise and Warrior Goddess Kettlebell Training on social media: Instagram @lisa_barwise @wgkettlebelltraining Facebook www.facebook.com/warriorgoddesskettlebelltrainingYoutubehttps://www.youtube.com/warriorgoddesskettlebelltraining What you can do to help the Podcast? If this podcast means anything to you and you want to support it. Simply Subscribe & Review in Apple Podcasts.Apple Podcasts is one of the only platforms where you can both subscribe and review.How to Subscribe or Follow The Podcast1. Open Apple Podcast App.2. Go to the icons at the bottom of the screen and choose “search3. Search for “Goddess Got Goals”4. Hit the top Right Hand "+" sign5. Open Spotify 6. Search for “Goddess Got Goals”7. Hit the 'Follow' underneath the image How to Leave a Podcast Review Open Apple Podcast App. Go to the icons at the bottom of the screen and choose “search” Search for “Goddess Got Goals” Click on the SHOW, not the episode. Scroll all the way down to “Ratings and Reviews” Click on “Write a Review” This is the best way for us to reach more people and of course let us know that our episodes mean something to you!...
If you're tired of chasing "flash in the pan" tactics that promise overnight results, this episode is your reality check. In this episode of Pipe Dream, host Jason Bradwell sits down with Dev Basu, CEO of Powered by Search, to unpack how to build an inbound-only growth motion that actually compounds over time instead of burning out your team and budget. Dev's core point is clear: stop creating remixable AI content and start building lived-experience content that creates goodwill as a moat. The marketers winning today aren't the ones doing more; they're the ones doing the simple things better and measuring what actually matters. For 16 years, Dev has helped VPs of marketing and CMOs at B2B SaaS companies build predictable pipeline without cold outreach. His approach targets two groups: the 5% of in-market demand actively looking for solutions, and the 45% of right-fit customers who don't wake up thinking they need your software but would benefit from it. Dev walks through Powered by Search's playbook, which drives more than half their inbound leads through LinkedIn alone. His SAGE framework (Simple, Actionable, Goal-oriented, Easy to consume) focuses on publishing content about how they've done something, not generic how-to advice. This lived-experience approach can't be copied through ChatGPT or Claude, building genuine goodwill that compounds over time. The conversation breaks down the "do more, do better, do new" framework. Most companies don't need revolutionary tactics; they need to optimise existing channels ruthlessly. AI plays a role, but it's about speed, not strategy. Dev uses AI to accelerate production once they know what good looks like, not to figure out what to say. Then Dev drops the tactical goldmine: the 3x10 rule. Get 10% more right-fit traffic, reduce acquisition cost by 10%, and increase average contract value by 10%. When you stack these three improvements, they compound to roughly 30% more pipeline (a quick worked sketch of that compounding follows this entry). He guarantees this in 90 days and explains exactly how, from internal linking to push pages onto page one of Google, to cutting wasted ad spend, to targeting slightly larger companies with higher willingness to pay. If you want a blueprint for building predictable B2B SaaS demand generation without the hype, this conversation delivers.
Chapter Markers
00:00 - Introduction: Dev Basu and the inbound-only motion
01:00 - The 5% in-market demand vs 45% right-fit customers
02:00 - Eating your own dog food: How Powered by Search acquires clients
03:00 - The problem with flash in the pan tactics and LinkedIn slop
04:00 - SAGE content framework: Building goodwill as a moat
05:00 - Triangulating attribution to prove LinkedIn drives half the pipeline
06:00 - Lived-experience content you can't remix with AI
08:00 - The playbook: Five pillars of demand generation
13:00 - Do more, do better, do new: The framework for prioritisation
16:00 - Using AI for speed, not strategy
20:00 - Buyer psychology and why nobody wants to "get a demo"
22:00 - The 3x10 rule: 30% more pipeline in 90 days
23:00 - Getting 10% more traffic with simple internal linking
24:00 - Cutting wasted ad spend to reduce CAC by 10%
25:00 - Moving upmarket slightly to increase ACV by 10%
26:00 - The Grand Slam offer and guarantee
27:00 - Where to learn more about Powered by Search
Useful Links
Connect with Jason Bradwell on LinkedIn
Connect with Dev Basu on LinkedIn
Learn more about Dev Basu
Explore Powered by Search and the Grand Slam Offer
Check out Clay for enrichment
Explore B2B Better website and the Pipe Dream podcast
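The compounding behind the 3x10 rule is easy to verify yourself. Here's a minimal sketch in Python; the baseline pipeline figure is purely hypothetical, and it treats a 10% CAC reduction as roughly a 10% gain in customers per dollar of spend (a simplification, not a figure from Powered by Search):

```python
# Minimal sketch of the "3x10 rule" compounding arithmetic.
# The baseline figure below is hypothetical and used only for illustration.

baseline_pipeline = 1_000_000.0  # quarterly pipeline in dollars (hypothetical)

traffic_lift = 1.10  # 10% more right-fit traffic
cac_lift     = 1.10  # 10% lower acquisition cost, treated as ~10% more customers per dollar
acv_lift     = 1.10  # 10% higher average contract value

combined_lift = traffic_lift * cac_lift * acv_lift  # 1.1 ** 3 = 1.331
new_pipeline = baseline_pipeline * combined_lift

print(f"Combined lift: {combined_lift:.3f}x (~{(combined_lift - 1) * 100:.0f}% more pipeline)")
print(f"Pipeline: ${baseline_pipeline:,.0f} -> ${new_pipeline:,.0f}")
```

Run as written, this prints a combined lift of 1.331x, about 33%, which is where the episode's "roughly 30% more pipeline" figure comes from: three modest 10% gains multiply rather than add.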
Bob is joined by retired NYPD Sergeant Joe Giacalone to discuss the details of the disappearance of Nancy Guthrie. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
On a snowy Black Friday in November 2024, a 22-year-old experienced hiker from Quebec set out alone to climb one of the most remote peaks in New York's Adirondack Mountains—a challenging 18-mile journey he expected to complete in a single day. When he didn't return as planned, what followed was one of the most extensive search operations in Adirondack history, involving dozens of elite forest rangers battling brutal winter conditions for over a week. This is the story of Leo Dufour, a university student studying to become a teacher who had already conquered 32 of the legendary 46 High Peaks, and the extraordinary efforts to find him in a wilderness that doesn't always give up its secrets. It's a reminder that even the most prepared among us are never more than one wrong turn away from the unforgiving power of the mountains.
00:00 Introduction to Disaster Strikes
00:42 Leo Dufour's Quest in the Adirondacks
03:00 The Challenge of Allen Mountain
07:09 The Search and Rescue Efforts
12:47 The Aftermath and Lessons Learned
24:37 Conclusion and Final Thoughts
Listen AD FREE: Support our podcast on Patreon: http://patreon.com/TheCruxTrueSurvivalPodcast
Email us! thecruxsurvival@gmail.com
Instagram: https://www.instagram.com/thecruxpodcast/
Get schooled by Julie in outdoor wilderness medicine! https://www.headwatersfieldmedicine.com/
References:
"New York State Department of Environmental Conservation Statement on Recovery of Missing Canadian Hiker Leo DuFour." DEC Press Release, May 10, 2025.
"Update: State Police seeking the public's assistance in locating a missing hiker in the town of Newcomb." NYS Police Press Release, December 2024.
Lynch, Mike. "Remains of missing Canadian hiker found." Adirondack Explorer, May 2025.
Lynch, Mike. "Missing hiker: What we know so far, as search enters 5th day." Adirondack Explorer, January 22, 2025.
Lynch, Mike. "Search for Canadian hiker shifts to recovery." Adirondack Explorer, March 28, 2025.
"DEC: Body of missing hiker Leo DuFour found May 10 off Mt. Allen Mountain trail." The Adirondack Almanack, May 12, 2025.
"Due to treacherous conditions, search for Leo DuFour transitioned to recovery mission." The Adirondack Almanack, December 10, 2024.
"Extensive search underway in the Adirondacks for missing Canadian hiker." NCPR News, December 4, 2024.
"Rangers had to divert resources during Allen Mt. search to rescue solo searcher." NCPR News, December 10, 2024.
"DEC: No signs of missing hiker Thursday." Adirondack Daily Enterprise, December 5, 2024.
"Hikers find body of missing person on Allen Mountain." My NBC5, May 2025.
"Body of Missing Hiker Is Found 5 Months After He Vanished in the Adirondacks." The New York Times, May 29, 2025.
"U.S. authorities find body of missing Quebec hiker in New York state's Adirondacks." The Globe and Mail (Canada), May 11, 2025.
Clarke, Julia. "Body of 22-year-old Canadian hiker found 5 months after vanishing on snowy Adirondacks mountain." Advnture, May 2025.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode, we normalize the conversation around all things PCOS, from symptoms and diagnosis to advocating for others, with guest Megan Stewart. About Megan: Megan Stewart is the Founder and Executive Director of the PCOS Awareness Association (PCOSAA), a 501(c)(3) nonprofit dedicated to educating, supporting, and empowering those affected by Polycystic Ovarian Syndrome (PCOS). Since 2012, she has led PCOSAA to become a global leader in PCOS advocacy and patient empowerment. Under her guidance, PCOSAA has launched transformative initiatives including PCOS CON, Shades of Teal Membership, Men of Teal, and the forthcoming Search for a PCOS Specialist platform. The organization also partners with Lujan Labs of Cornell University and proudly contributes to Dr. Jill Biden's Women's Health Initiative, advancing national efforts for equitable women's health research and awareness. A woman living with PCOS herself, Megan channels her experience into action — driving PCOSAA's mission to educate, empower, and elevate every voice impacted by PCOS through compassion, innovation, and advocacy.
'90s nostalgia is everywhere right now, and it's not a random coincidence. In this episode, I explore the documentary In Search of Darkness 1995-1999 (2026) and the psychology of nostalgia. I talk about:
How we define nostalgia
The mental health benefits of nostalgia
How nostalgia is particularly beneficial for those suffering with dementia or cognitive decline
Why we cling to nostalgia in times of change or uncertainty
When we need to be careful about over-indulging in nostalgia
The Three of Cups and how this tarot card evokes nostalgic feelings for me
Mental Health is Horrifying is hosted by Candis Green, Registered Psychotherapist and owner of Many Moons Therapy
Show Notes:
Want to work together? I offer 1:1 virtual psychotherapy for Ontario residents, along with tarot, horror, and dreamwork services (anywhere my bat signal reaches), both individually and through my group program, the Final Girls Club.
Podcast artwork by Chloe Hurst at Contempo Mint
Get up to 20% off Cozy Earth with promo code HORRIFYING. If you get a survey post-purchase, be sure to let them know Candis sent you!
Get 20% off In Search of Darkness 1995-1999 with promo code HORRORFRIENDS26.
Woods B, O'Philbin L, Farrell EM, Spector AE, Orrell M. Reminiscence therapy for dementia. Cochrane Database Syst Rev. 2018 Mar 1;3(3):CD001120. doi: 10.1002/14651858.CD001120.pub3. PMID: 29493789; PMCID: PMC6494367.
Ismail S, Christopher G, Dodd E, Wildschut T, Sedikides C, Ingram TA, Jones RW, Noonan KA, Tingley D, Cheston R. Psychological and Mnemonic Benefits of Nostalgia for People with Dementia. J Alzheimers Dis. 2018;65(4):1327-1344. doi: 10.3233/JAD-180075. PMID: 30149444.
Send a text
Scott, Cardone and Novak discuss the Super Bowl and the predictable outcome that came to pass. We also discuss whether this is the start of the Pats' climb to the top or the fizzling out of it, and what it means for the rest of the NFL. Then we air our pet peeves and finish up with listener questions.
Our website: www.angryfootballfans.com. Please check it out and subscribe to our pod.
Download our podcast at Buzzsprout: https://www.buzzsprout.com/1358293
Or wherever you get your podcasts. We are also now available on YouTube. Search for Three Angry Giants fans and subscribe to our channel.
Search for Savannah Guthrie's mother continues with 18,000-plus calls pouring in; GA loses 209,000 ACA enrollees amid Medicaid debate; New Seasons workers win landmark union contract in OR; Experts: KY pork plant settlement wouldn't protect environmental health.
Host Scott Hennen returns with a heavy, fast-moving Tuesday edition that shifts from a glowing review of the new Melania movie to the grim reality of local and national tragedies. The episode centers on two disturbing disappearances: the high-profile kidnapping of Savannah Guthrie's mother and a heartbreaking local murder investigation in Fargo that has authorities searching landfills and rural properties for human remains. Between the true crime updates, Scott sits down for a civil but intense discussion with local activists whose "Red Hat" protest—inspired by WWII Norwegian resistance—has sparked a firestorm of debate in the Red River Valley. Plus, we meet a YouTube-famous student farmer, look at the future of real estate education at UND, and learn why the El Paso airport just went into a 10-day lockdown.
Episode Highlights
[00:01:10] Melania: The Movie Review
Scott shares his impressions of the private screening of Melania. Whether you're a fan or a critic, Scott argues the film offers a powerful glimpse into the First Lady's life and her successful career before meeting Donald Trump.
[00:10:00] The Red Hat Resistance
In a standout moment of civil discourse, Scott is joined by Cheryl Rosted and Ivan Thompson. They explain why they wear red hats to protest ICE and the Trump administration, while Scott challenges their comparisons to Nazi-occupied Norway.
[00:26:45] The "Stolen Land" Debate
The team reacts to student-led ICE protests at Davies High School. Scott sounds off on the "scary" reality of students getting news from social media and the controversial narrative regarding indigenous land.
[00:32:15] The Search for Isadora Wengel
A somber update on the disappearance of 25-year-old Isadora Wengel. Authorities have arrested her boyfriend for murder and are now asking the public to look for a specific 27-gallon black tote with a red lid.
[00:44:10] The Franson Department of Real Estate
Interim Dean Patrick O'Neill joins to discuss a historic naming at the University of North Dakota. Thanks to a legacy gift from Bob Franson, UND is launching a specialized program to train the next generation of property developers.
[00:52:15] Money, Markets, and Metals
Landmark Gold's David Fisher breaks down why gold is up 18% year-to-date and what China's "digital yuan" surge means for the future of the U.S. dollar.
No suspects have been identified in Nancy's kidnapping as of Wednesday, February 11, though FBI Director Kash Patel said they are investigating potential "persons of interest."
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
A father and his daughter go missing in Labette County, Kansas in late 1872. A neighbor searches for them, and he never comes home. His wife sounds the alarm. Search parties descend on the Bender farm. The Benders are long gone, but the victims of their murderous deeds remain. Local authorities hunt for the Bender clan, but justice proves elusive. Thanks to our sponsor, Quince! Use this link for Free Shipping and 365-day returns: Quince.com/lotow Join Black Barrel+ for ad-free episodes and bingeable seasons: blackbarrel.supportingcast.fm/join Apple users join Black Barrel+ for ad-free episodes, bingeable seasons and bonus episodes. Click the Black Barrel+ banner on Apple to get started with a 3-day free trial. For more details, visit our website www.blackbarrelmedia.com and check out our social media pages. We're @OldWestPodcast on Facebook, Instagram and Twitter. On YouTube, subscribe to LEGENDS+ for ad-free episodes and bingeable seasons: hit “Join” on the Legends YouTube homepage. Learn more about your ad choices. Visit megaphone.fm/adchoices
Updating the events of the day as we are into day 11 of the search for Nancy Guthrie.
Become a supporter of this podcast: https://www.spreaker.com/podcast/pretty-lies-and-alibis--4447192/support
ALL MERCH 10% off with code Sherlock10 at checkout - NEW STYLES
Donate: (Thank you for your support! Couldn't do what I love without all y'all)
PayPal - paypal.com/paypalme/prettyliesandalibis
Venmo - @prettyliesalibis
Buy Me A Coffee - https://www.buymeacoffee.com/prettyliesr
Cash App - Prettyliesandalibis
All links: https://linktr.ee/prettyliesandalibis
Merch: prettyliesandalibis.myshopify.com
Patreon: https://www.patreon.com/PrettyLiesAndAlibis (Weekly lives and private message board)
On an April afternoon in 1964, a police officer's observation of an unusual, landed aircraft in New Mexico would become one of ufology's most baffling cold cases. Decades later, two women's unnerving encounter with a strange aircraft over a Texas highway would leave them suffering from health effects that led them to believe they had witnessed a U.S. government test gone awry. But could these two famous UFO cases have more in common than most would ever think? This week on The Micah Hanks Program, our examination of some of America's most controversial UFO cases leads us to questions about a supposed "UFO legacy program," and what such a program—if it exists—might entail. Could some of history's most well-documented UFO cases point to something the U.S. government knows far more about than it's letting on?
Want to advertise/sponsor The Micah Hanks Program? We have partnered with AdvertiseCast to handle our advertising/sponsorship requests. If you would like to advertise with The Micah Hanks Program, all you have to do is click the link below to get started:
AdvertiseCast: Advertise with The Micah Hanks Program
Show Notes
Below are links to stories and other content featured in this episode:
NEWS:
Brad Arnold, lead singer of Grammy-nominated rock band 3 Doors Down, dies at 47
Maxwell invokes the Fifth Amendment at closed virtual House Oversight deposition
Search for Savannah Guthrie's mother continues as detectives analyze ransom note
Study of AI generated Neanderthal scenes reveals major gaps with modern archaeological research
The Dying Children Who Suddenly Wake Up
SOCORRO: Socorro Landing: A UFO Story - Visit Socorro New Mexico
CASH-LANDRUM: UFO Incident Near Dayton, Texas, in December 1980
BECOME AN X SUBSCRIBER AND GET EVEN MORE GREAT PODCASTS AND MONTHLY SPECIALS FROM MICAH HANKS. Sign up today and get access to the entire back catalog of The Micah Hanks Program, as well as "classic" episodes, weekly "additional editions" of the subscriber-only X Podcast, the monthly Enigmas specials, and much more.
Like us on Facebook
Follow @MicahHanks on X.
Keep up with Micah and his work at micahhanks.com.
Storm Paglia and Matt Vespa discuss the latest news of the day! From Ryan Routh getting life in prison, insanity among liberals continuing in Minnesota, Republicans slow-walking the SAVE Act, Scott Bessent owning Maxine Waters, and Super Bowl predictions, the guys have you covered!
G-Wagon, Top 10 Things Heard at a Trump Rally, The Search for Bob-O, JD and a will, Show Horses, more Cars, Johnny Manziel, Naked ladies on Vehicles, Daylight Savings, and used car scandals!
When Valentines become NIGHTMARES ■ Why is Kyrsten Sinema being SUED by her bodyguard's WIFE? ■ Adam & Eve – how it REALLY went down ■ Is UTAH the creepiest state? ■ Meet the new BONNIE & CLYDE