President Trump attempted earlier today to distance himself from the dead pedophile and sex trafficker Jeffrey Epstein. A look at the revelations inside the Epstein files. Plus, an urgent search for the mother of Today Show anchor Savannah Guthrie. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Hey guys, here is my complete method!
Prep: Plan (room & scheduling map, zoning & destination boxes); List (get your task list)
Progress: Nervous system regulation (breathing, somatic grounding, spatial grounding); routing objects; task list
Process: Clean up; take boxes to destinations; batch and schedule tasks
Protect: Protect your space, protect your energy, protect your capacity
If you want to go deeper and have support decluttering your home consistently, the year-long program is open. You can find all the details at declutteryourchaos.com. ✨Come home to yourself. ✨ Head to Cozy Earth and use my code DECLUTTER for 20% off and experience the softest sheets you can find: https://cozyearth.com/ If this episode helped you, please leave a review or share it with someone who needs it. Looking forward to seeing your progress in the free Facebook group. To join, click below... https://www.facebook.com/groups/declutteryourchaos/ Download my free decluttering planner here: https://declutteryourchaos.com/decluttering-planner Let's connect:
In this episode of Dark Side Divas we discuss the Star Wars - The Bad Batch episode "War Mantle" (s1e14). Clone Force 99 gets a distress call from Rex, who is going to ask for a favor. Meanwhile we met a "TK Trooper" for the first time, and we go back to Kamino. Listen to this episode to hear what the divas have to say!
From January 27, 2025: In a live conversation on January 23, Lawfare Editor-in-Chief Benjamin Wittes spoke to Lawfare Senior Editors Scott R. Anderson, Anna Bower, Quinta Jurecic, and Alan Rozenshtein and assistant law professor at Pace University Amelia Wilson about the first batch of executive orders by President Trump in his second term, including suspending enforcement of the TikTok ban, the use of the military at the border, the birthright citizenship order, and the legal challenges some of these orders are facing. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Alex hunts down the best and most sought-after whiskeys released in 2025! E.H. Taylor BTAC, Russell's Reserve 15 Year, Jack Daniel's Single Barrel Special Release Tanyard Hill Rye, Bardstown Bourbon Company Distillery Reserve Hokkaido Mizunara Oak Barrel Finish, and Hill Farmstead Whistle Pig Rye 10 Year all compete for that coveted spot atop the Malt Couture Power Rankings. In the Beer News, the TTB gets served a lawsuit to allow meads, ciders, and fruit wines to display vintages on their labels, and the world's oldest monastic brewery in Germany is sold. Thanks to Amory's Tomb Brewing Co. for sponsoring this episode. Visit their newly reopened tap room in Maynard, Massachusetts. Look for them at the New England Real Ale Exhibition from March 25-28 and at Widowmaker's Hopsmokerfest in April! Follow them on IG @AmorysTomb! To get involved with the "Life" International Barleywine Collab, click the link for info about the recipe, BSG discount, and links to help raise awareness of colon cancer. If you'd like to make a direct donation to help support Alex, head over to his GoFundMe. For more info about colon cancer and to help support the fight against it, check out the Colon Cancer Foundation. Head to our Patreon for weekly exclusive content. Get the Malt Couture Officially Licensed T-shirt. Follow DontDrinkBeer on Instagram and Twitter
Food FAQ - Learn How to Cook: Cooking, Kitchen Tips, and Lots of Love
Stop paying twenty dollars for a tiny bottle of "artisanal" syrup when one ten-dollar bag of dried flowers makes enough mixer for forty people!
How does gut health affect MS – and what role can diet really play in supporting the immune system? In this episode of Living Well with MS, Geoff is joined by Overcoming MS Program Facilitator Karen Lee – retired intensive care nurse, nutritionist, author and recipe developer. Karen shares her MS journey and explains, in clear and accessible terms, how gut health, inflammation and diet are connected in autoimmune conditions such as MS. They explore dysbiosis and “leaky gut”, why fibre and the microbiome matter, and how whole-food plant-based eating fits within the Overcoming MS dietary recommendations. Karen also talks about her new book Healing from the Inside Out and shares practical, fatigue-friendly tips for eating more plants – without overwhelm. Watch this episode on YouTube. Keep reading for the topics, timestamps, and our guest's bio.
00:56 Welcome and introduction to Karen
01:49 Karen's MS journey: diagnosis, optic neuritis and early changes
04:46 From intensive care nursing to nutrition, writing and teaching
09:32 Understanding the Overcoming MS diet recommendations
10:36 Why diet matters for immune health: nutrients, fats, fibre and the microbiome
17:57 Dysbiosis explained – and how it relates to autoimmunity and MS
22:02 “Leaky gut”: what it means and why inflammation matters
23:16 Inside Karen's new book Healing from the Inside Out
26:46 How whole plant foods support overall health
29:50 Protein and plant-based diets: common concerns addressed
31:39 Practical tips for eating more plants and increasing variety
37:58 Favourite recipes, sauces and simple ways to add flavour
39:41 Batch cooking and freezing for low-energy days
41:26 Running the Taunton Half Marathon and fundraising for Overcoming MS
Order Karen's latest book Healing from the Inside Out: Managing Autoimmunity with a Whole-Food Plant-Based Diet
Support Karen's fundraiser for Overcoming MS
Learn more about Karen's work
New to Overcoming MS? Learn why lifestyle matters in MS - begin your journey at our 'Get started' page
Connect with others following Overcoming MS on the Live Well Hub
Visit the Overcoming MS website
Follow us on social media: Facebook, Instagram, YouTube, Pinterest
Don't miss out: Subscribe to this podcast and never miss an episode. Listen to our archive of Living Well with MS here. Make sure you sign up to our newsletter to hear our latest tips and news about living a full and happy life with MS.
Support us: If you enjoy this podcast and want to help us continue creating future podcasts, please leave a donation here.
Feel free to share your comments and suggestions for future guests and episode topics by emailing podcast@overcomingms.org
If you like Living Well with MS, please leave a 5-star review
Sexier Than A Squirrel: Dog Training That Gets Real Life Results
Send us a text. What if changing what's on your plate could change how you feel, think, and even how you train your dog? We dive into a candid journey from a daunting fibroid diagnosis and surgery-first advice to a practical, food-first plan built around whole ingredients, simple prep, and flavour that sticks. Along the way, we talk about the 25% reduction that showed up on a scan, the meals that kept us going, and the mindset shifts that made healthy choices sustainable through packed training days.
We get specific about what worked: ditching ultra‑processed foods in favour of vegetables, legumes, nuts, seeds, and natural fats; building quick wins like a 20‑minute lentil curry finished with lime; blending a blueberry‑coconut chia breakfast that sets up the morning; and keeping freezer-ready energy balls for the afternoon slump. Batch habits make the difference: slow‑cooker chilli loaded with greens, a soup maker that turns prepped veg bags into grab‑and‑go lunches, and simple hydration cues to separate thirst from hunger. For treats, we keep the joy without the crash—cauliflower nachos with guacamole, citrus‑coconut dessert bites, sweet potato fries, and a cashew‑based Caesar that tastes like the classic.
Beyond recipes, we share why health upgrades translate to better dog training—more patience, cleaner timing, steadier energy, and clearer communication. Travel tips, sourcing strategies, and UK‑friendly healthy finds round out a plan that's realistic, affordable, and family‑proof, even for picky teens. If you've been on the fence about shifting your diet or wondered how to fuel long training days without relying on packets and powders, this is your blueprint.
Ready to feel better and train better? Subscribe, share this with a friend who needs a nudge, and message us your first swap—what whole‑food habit are you starting this week?
Support the show
If you're loving the podcast, you'll love our NEW Sexier than a Squirrel Dog Training Challenge even more! Get transformational dog training today for only £27!
Want even more epic dog training fun and games and solutions to all your dog training struggles? Join us in the AbsoluteDogs Games Club! https://absolutedogs.me/gamesclub
Want to take your learning to the next level? Jump into the games-based training membership for passionate dog owners and aspiring trainers that know they want more for themselves and their dog - Pro Dog Trainer Club! https://absolutedogs.me/prodogtrainerclub
And while you're here, please leave a review for us and don't forget to hit share and post your biggest lightbulb moment! Remember, no matter what struggles you might be facing with your dog, there is always a game for that!
Can you steal an entire cantina? Not if it's owned by Cid! In this episode of Dark Side Divas we discuss the Star Wars - The Bad Batch episode "Infested" (s1e13). The Bad Batch return from a dangerous mission only to find out Cid is nowhere to be seen, and someone else is at her cantina. Why is Omega the voice of reason, again...in this episode? Listen to hear what the divas have to say.
Tommy & Josh are the co-owners of Watch Hill Proper located in Louisville, Kentucky. Watch Hill Proper is the largest American Whiskey bar in the world. The point of the American Whiskey Show is to have fun with whiskey and to share a little knowledge about it in the process. Grab a pour and join us on our journey. Episode 114: Larrikin Deep Purple Batch 4 www.watchhillproper.com
Today's blockchain and crypto news Bitcoin is up slightly at $88,599 Ethereum is up slightly at $2,936 And Binance Coin is up slightly at $876 Bloomberg says Trump family fortune increased by $1.4B thanks to crypto Winklevoss Twins donate ZEC to support Zcash Hong Kong plans first batch of stablecoins Learn more about your ad choices. Visit megaphone.fm/adchoices
You know you need to create more video content, but between overwhelm, procrastination and overthinking, you're struggling. Let me show you how I create a month of content in hours so you can steal my methods and get out of your own way. This episode dives into how much content is required, batching strategies and how to keep innovative ideas flowing.
I'll create a profitable profile for you in minutes. Click to attract high-paying clients. https://go.taelerdehaes.com/bio-survey
Join our Fit Pro Business Secrets Made Simple group over on Facebook for exclusive resources, trainings and help as you're growing your online fitness business. https://www.facebook.com/groups/fitprobusinesssecrets/
Follow Taeler on Instagram. https://www.instagram.com/taelerfit/
Learn more about working with Taeler, whether you're just starting your online coaching business or scaling to multi-6/7-figures. https://taelerdehaes.com/
This week on NPC, rumors swirl about the Steam Machine's pricing, AYANEO pauses to collect itself, GameSir's Pocket Taco goes live, the lack of foldable phone controllers, and our first video games. Also available on YouTube here.
Links and Show Notes
The Latest Portable Gaming News:
Steam Machine pricing may have leaked
Meta has closed three VR studios as part of its metaverse-focused layoffs
GameSir Pocket Taco phone controller Kickstarter is now live
AYANEO Pocket PLAY delayed
AYN Thor gets major OTA update, Batch 2 shipping January 15, prices rising for Batch 3
Retrocade is coming to Apple Vision Pro
A FEX announcement is coming
Subscribe to NPC XL
NPC XL is a weekly members-only version of NPC with extra content, available exclusively through our new Patreon for $5/month. Each week on NPC XL, Federico, Brendon, and John record a special segment or deep dive about a particular topic that is released alongside the "regular" NPC episodes. You can subscribe here: https://www.patreon.com/c/NextPortableConsole
Leave Feedback for John, Federico, and Brendon: NPC Feedback Form
Credits
Show Art: Brendon Bigley
Music: Will LaPorte
Follow Us Online
On the Web: MacStories.net, Wavelengths.online
Follow us on Mastodon: NPC, Federico, John, Brendon
Follow us on Bluesky: NPC, MacStories, Federico Viticci, John Voorhees, Brendon Bigley
Affiliate Linking Policy
Do you avoid cleaning your house and then wonder when you last cleaned it? Do you struggle with an all-or-nothing mentality which prevents you from staying consistent?
In this episode, we're talking about two ways to manage housework: batching tasks into big cleaning sessions and spreading small tasks into daily routines. Each approach has benefits and challenges, especially for ADHD brains and anyone who struggles with an all-or-nothing mindset.
We'll explore why batching can help you use your energy and focus efficiently, but also why it can feel overwhelming or easy to skip. Daily routines, on the other hand, keep messes small and manageable, help habits stick, and require less motivation—but they can feel boring or easy to ignore if expectations are too high.
This episode will help you understand how to use both batching and daily routines in a way that actually supports your home and your energy. You'll learn how to find a balance that reduces overwhelm, keeps your space manageable, and makes cleaning feel possible—even on chaotic days.
If this episode blessed you, leave a review! Thank you so much! - XO
COACHING: Schedule a 15-Minute Consultation
JOIN The Accountability Club
FREE Daily Reset Checklist
DO YOUR WILL @ Mama Bear Legal: 20% Off with code H&H20
MY FAVORITE PLANNER: At-A-Glance Harmony Planner
Patreon: https://patreon.com/dadbatchpod email: dadbatchpod@gmail.com Subscribe to The Dad Batch on YouTube Get The Dad Batch merch: https://shop.thedadbatch.com Social media: instagram.com/dadbatchpod Follow the hosts on social media: instagram.com/stevie.kickz instagram.com/alphaignition instagram.com/sithing.aint.easy Instagram.com/tech.badbatch instagram.com/pabufrik instagram.com/leftcoastavenger
The Morning Rush gets to know the newest scholars under the Monster Scholarship Program Batch 24! Follow us on our socials: Facebook, X, Instagram, TikTok. Subscribe to our YouTube channel for more content.
After spending a decade working for the Empire, John Perkins walked away from his life as an Economic Hitman and gave up the game in his transformative book “Confessions of an Economic Hitman”. With the third edition of the book now available, we explore the role of the new group of financial arsonists who have set their sights on Latin America.
Will China continue the process of empire building that the United States began half a century ago, or does its plan for a new Silk Road reward cooperation and collaboration instead? With the majority of its mineral wealth locked down inside the ground, could China secure the resources that it covets in South America while also raising the standard of living for an entire continent? Not if the American Empire has anything to say about it.
Guest Links
John Perkins - Confessions of an Economic Hit Man: https://johnperkins.org/
Watch the video version on one of the Macroaggressions Channels:
Rumble: https://rumble.com/c/Macroaggressions
YouTube: https://www.youtube.com/@MacroaggressionsPodcast
MACRO & Charlie Robinson Links
Hypocrazy Audiobook: https://amzn.to/4aogwms
The Octopus of Global Control Audiobook: https://amzn.to/3xu0rMm
Website: www.Macroaggressions.io
Merch Store: https://macroaggressions.dashery.com/
Link Tree: https://linktr.ee/macroaggressionspodcast
Activist Post Family
Activist Post: www.ActivistPost.com
Natural Blaze: www.NaturalBlaze.com
Support Our Sponsors
Anarchapulco: https://anarchapulco.com/ | Promo Code: MACRO
C60 Power: https://go.shopc60.com/PBGRT/KMKS9/ | Promo Code: MACRO
Chemical Free Body: https://chemicalfreebody.com/macro/ | Promo Code: MACRO
Wise Wolf Gold & Silver: https://macroaggressions.gold/ | (800) 426-1836
LegalShield: www.DontGetPushedAround.com
EMP Shield: www.EMPShield.com | Promo Code: MACRO
Ground Luxe Grounding Mats: https://groundluxe.com/MACRO
Christian Yordanov's Health Program: www.LiveLongerFormula.com/macro
Above Phone: https://abovephone.com/macro/
Van Man: https://vanman.shop/?ref=MACRO | Promo Code: MACRO
The Dollar Vigilante: https://dollarvigilante.spiffy.co/a/O3wCWenlXN/4471
Nesa's Hemp: www.NesasHemp.com | Promo Code: MACRO
Augason Farms: https://augasonfarms.com/MACRO
Alex and Stephen start 2026 with a visit from Lord Maris for the seventh Barleywine is Life episode with four Barleywhales that have been lighting up the tradeboards on the secondary market. Featuring barleybobs from The Veil (STARVE: Exhibit H), Goose Island (King Henry II), Anchorage Brewing (Penta Oaked A Deal With the Devil), and Half Acre (Bazalt Wilderness of History). In the Beer News, Jim Beam temporarily shutters a distillery, Disneyland offers a $250 adult beverage, and Bells Brewing tests the spelling skills of their fanbase for this year's Hopslam release. To get involved with the "Life" International Barleywine Collab, click the link for info about the recipe, BSG discount, and links to help raise awareness of colon cancer. If you'd like to make a direct donation to help support Alex, head over to his GoFundMe. For more info about colon cancer and to help support the fight against it check out the Colon Cancer Foundation. Head to our Patreon for weekly exclusive content. Get the Malt Couture Officially Licensed T-shirt. Follow DontDrinkBeer on Instagram and Twitter
Use promo code OLEARY on Sleeper and get 100% match up to $100! https://Sleeper.com/promo/OLEARY. Terms and conditions apply. #Sleeper
Matt O'Leary discusses the New York Jets' next batch of firings.
Play in my Free to Play and win $100 on PeopleGuess: https://www.peopleguess.com/
The ultimate Jets fan experience is here. Matt O’Leary content, every time you open a new tab. Install the free Swv
All videos now available in Podcast Form: Apple
Hera is running from The Empire for the first time, with Chopper at her side! She's gonna have to get used to it. In this episode of Dark Side Divas we discuss the Star Wars - The Bad Batch episode "Rescue on Ryloth" (s1e12). Clone Force 99 gets a call for help from Hera, because Omega hooked her up with her personal cell number. Will Hunter be able to help Hera save her family from the clutches of The Empire? Listen to this episode to hear what Stef and Chris have to say. Warning: We do discuss a lot of real world politics in this episode. Stef and Chris have a lot to say, and if you are looking for an escape from the horrors of the world, you may want to skip this episode.
Welcome back to #WithSONAR! This week, we're diving into Batch Rate Intelligence, SONAR's multi-lane pricing tool designed to support short-term pricing decisions and RFP strategy with downloadable, market-aligned rate intelligence. In this session, you'll learn how to:
- Access Batch Rate Intelligence within SONAR applications
- Upload lanes using the downloadable template or work directly in the UI
- View real-time broker-to-carrier spot rates updated daily
- Compare spot and contract rates (where available) to evaluate margin and spread
- Understand lane scores to gauge capacity difficulty and pricing leverage
- Monitor daily rejection rates to anticipate spot rate pressure
- Export full datasets for Excel-based RFP analysis and customization (one way to work with such an export is sketched below)
Batch Rate Intelligence is an add-on within SONAR and is especially valuable as we head into RFP season, helping ensure your pricing reflects real-time market conditions.
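If you pull the exported dataset into your own tooling, the spot-versus-contract margin comparison described above takes only a few lines. Below is a minimal Python/pandas sketch; the file name, the column names (lane_id, spot_rate, contract_rate, rejection_rate), and the 10% rejection threshold are hypothetical placeholders for illustration, not SONAR's actual export schema or API.

```python
import pandas as pd

def summarize_lanes(csv_path: str) -> pd.DataFrame:
    # Load a hypothetical lane-level export (columns are assumptions).
    lanes = pd.read_csv(csv_path)
    # Spread between contract and spot rate as a rough per-lane margin signal.
    lanes["spread"] = lanes["contract_rate"] - lanes["spot_rate"]
    lanes["spread_pct"] = lanes["spread"] / lanes["contract_rate"]
    # Flag lanes where a high daily rejection rate suggests spot-rate pressure.
    lanes["at_risk"] = lanes["rejection_rate"] > 0.10
    return lanes.sort_values("spread_pct")

if __name__ == "__main__":
    summary = summarize_lanes("batch_rate_export.csv")
    print(summary[["lane_id", "spot_rate", "contract_rate", "spread_pct", "at_risk"]].head())
```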
Tommy & Josh detail the latest release from Watch Hill Whiskey Company, the Exceptional Series Batch 03. watchhillwhiskeyco.com
The semifinals of the College Football Playoffs are upon us. Mike and Jim pick both games. They also go through Wild Card Weekend in the NFL. The guys answer your mailbag questions and also wonder if the Giants are still a premier job in the NFL. All of this and more on the latest episode of Cash the Ticket today. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Happy New Year! You may have noticed that in 2025 we moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They went on to be one of the few AI Grant companies to raise a full seed round from Nat Friedman and Daniel Gross, and have since become the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

We have chatted with both Clementine Fourrier of HuggingFace's OpenLLM Leaderboard and Anastasios Angelopoulos of LMArena (freshly valued at $1.7B) on their approaches to LLM evals and trendspotting, but Artificial Analysis have staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is “open” really?

We discuss:

* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints
* How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs.
leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* Omniscience Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDP Val AA: their version of OpenAI's GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omniscience Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (HumanEval-style coding is now trivial for small models)
Links to Artificial Analysis
* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith
Full Episode on YouTube
Timestamps
* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 28:01 New Benchmarks: Omniscience Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for Intelligence
Transcript
Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.
swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me.
Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks, and how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...
George [00:01:09]: Yeah, but you can't pay us for better results.
swyx [00:01:12]: Yes, exactly.
George [00:01:13]: Very important.
Micah [00:01:14]: Start off with a spicy take.
swyx [00:01:18]: Okay, how do I pay you?
Micah [00:01:20]: Let's get right into that.
swyx [00:01:21]: How do you make money?
Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026. Which is pretty soon now. We first run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups. So we want to be... We want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But turns out a bunch of our stuff can be pretty useful to companies building AI stuff.
swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?
George [00:02:53]: So we have a benchmarking and insight subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself. That is an example kind of decision that big enterprises face, and it's hard to reason through, like this AI stuff is really new to everybody. And so with our reports and insight subscription we try and help companies navigate that. We also do custom private benchmarking. And so that's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks, run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking.
Yeah. So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.
swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.
Micah [00:04:19]: George was in SF, but he's Australian, but he moved here already. Yeah.
swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll go to the private benchmark. Yeah.
George [00:04:33]: Why don't we even go back a little bit to like why we, you know, thought that it was needed? Yeah.
Micah [00:04:40]: The story kind of begins like in 2022, 2023, like both George and I have been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. So it actually worked pretty well for its era, I would say. Yeah. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. So had like this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like you're trying to think about accuracy, a bunch of other metrics and performance and cost. And mostly just no one was doing anything to independently evaluate all the models. And certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.
swyx [00:05:49]: Like we didn't like get together and say like, Hey, like we're going to stop working on all this stuff. I'm like, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.
Micah [00:05:58]: That's actually true. I don't even think we'd paused anything, like, George still had his job. I didn't quit working on my legal AI thing. Like it was genuinely a side project.
George [00:06:05]: We built it because we needed it as people building in the space and thought, Oh, other people might find it useful too. So we'll buy a domain and link it to the Vercel deployment that we had and tweet about it. And, but very quickly it started getting attention. Thank you, Swyx, for, I think, doing an initial retweet and spotlighting this project that we released. And then very quickly though, it was useful to others, but very quickly it became more useful as the number of models released accelerated. We had Mixtral 8x7B and it was a key. That's a fun one. Yeah. Like an open source model that really changed the landscape and opened up people's eyes to other serverless inference providers and thinking about speed, thinking about cost. And so that was a key. And so it became more useful quite quickly. Yeah.
swyx [00:07:02]: What I love talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data and that's obviously more relevant. But I want to stay on the origin story a little bit more.
When you started out, I would say, I think the status quo at the time was every paper would come out and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some version of this: there's some version of an Excel sheet or a Google sheet where you just like copy and paste the numbers from every paper and just post it up there. And then sometimes they don't line up because they're independently run. And so your numbers are going to look better than... Your reproductions of other people's numbers are going to look worse because you don't hold their models correctly or whatever the excuse is. I think then Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. If I were to start Artificial Analysis at the same time you guys started, I would have used EleutherAI's eval harness. Yup.
Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, it's like if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website. Yeah. Yeah. Like one of the reasons why we realized that we had to run the evals ourselves and couldn't just take results from the labs was just that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get- You can put the answer into the model. Yeah. That in the extreme. And like you get crazy cases like back when Google had Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4 and like constructed, I think never published, like chain of thought examples. 32 of them in every topic in MMLU to run it, to get the score, like there are so many things that you- They never shipped Ultra, right? That's the one that never made it up. Not widely. Yeah. Yeah. Yeah. I mean, I'm sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.
swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren't at the start.
Micah [00:09:36]: So like, I mean, we're paying for it personally at the start. There's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of like hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was like kind of fine. Yeah. Yeah. These days that's gone up an enormous amount for a bunch of reasons that we can talk about. But yeah, it wasn't that bad because you can also remember that like the number of models we were dealing with was hardly any and the complexity of the stuff that we wanted to do to evaluate them was a lot less. Like we were just asking some Q&A type questions and then one specific thing was for a lot of evals initially, we were just like sampling an answer.
You know, like, what's the answer for this? Like, we didn't want to go into the answer directly without letting the models think. We weren't even doing chain of thought stuff initially. And that was the most useful way to get some results initially. Yeah.
swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right? Like because sometimes the models can answer any way they see fit and sometimes they actually do have the right answer, but they just return the wrong format and they will get a zero for that unless you work it into your parser. And that involves more work. And so, I mean, but there's an open question whether you should give it points for not following your instructions on the format.
Micah [00:11:00]: It depends what you're looking at, right? Because you can, if you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's answered. But these days, it's mostly less of a problem. Like, if you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.
swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple choice question, sometimes there's a bias towards the first answer, so you have to randomize the responses. All these nuances, like, once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's so dark magic.
Micah [00:11:47]: You've also got, like… You've got, like, the different degrees of variance in different benchmarks, right? Yeah. So, if you run four-option multiple-choice on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see on a four-option multiple-choice eval is pretty enormous if you only do a single run of it, and especially if it has a small number of questions. So, like, one of the things that we do is run an enormous number of repeats of all of our evals when we're developing new ones and doing upgrades to our intelligence index to bring in new things. Yeah. So that we can dial in the right number of repeats so that we can get to the 95% confidence intervals that we're comfortable with, so that when we pull that together, we can be confident in Intelligence Index to at least as tight as, like, plus or minus one at a 95% confidence. Yeah.
swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.
George [00:12:37]: So, that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that's assuming one repeat in terms of how we report it because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there because of the repeats.
swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking, you don't have any special deals with the labs. They don't discount it. You just pay out of pocket or out of your sort of customer funds. Oh, there is a mix.
So, the issue is that sometimes they may give you a special endpoint, which is… Ah, 100%.
Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, like, on everything we do on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true for, like, the one you bring up, like, right here: the fact that if we're working with a lab, if they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy. And so, and we're totally transparent with all the labs we work with about this, that we will register accounts not on our own domain and run both intelligence evals and performance benchmarks… Yeah, that's the job. …without them being able to identify it. And no one's ever had a problem with that. Because, like, a thing that turns out to actually be quite a good… …good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.
swyx [00:14:23]: That's true. I never thought about that. I've been in the database and data industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.
Micah [00:14:36]: I mean, okay, the biggest one, like, that I'll bring up, like, is more of a conceptual one, actually, than, like, direct shenanigans. It's that the things that get measured become the things that get targeted by labs in what they're trying to build, right? Exactly. So that doesn't mean anything that we should really call shenanigans. Like, I'm not talking about training on test set. But if you know that you're going to be graded on a particular thing, if you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing that preferably are going to be helpful for a wide range of how actual users want to use the thing that you're building. But they will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to, like, how we might use modern coding agents and stuff. But it's clearly not one for one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that being a reflection of the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.
swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier. You've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant that you guys decided to join and move here. What was it like? I think you were in, like, batch two? Batch four. Batch four.
Okay.
Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies and were extremely aligned with the mission of what we were trying to do. Like, we're not quite typical of, like, a lot of the other AI startups that they've invested in.
swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they say any advice that really affected you in some way or, like, were one of the events very impactful? That's an interesting question.
Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.
swyx [00:17:09]: Which is also, like, a crazy list. Yeah.
George [00:17:11]: Oh, totally. Yeah, yeah, yeah. There was something about, you know, speaking to Nat and Daniel about the challenges of working through a startup and just working through the questions that don't have, like, clear answers and how to work through those kind of methodically and just, like, work through the hard decisions. And they've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that other companies in the batch and other companies in AI Grant are pushing the capabilities. Yeah. And I think that's a big part of what AI can do at this time. And so being in contact with them, making sure that Artificial Analysis is useful to them has been fantastic for supporting us in working out how we should build out Artificial Analysis to continue being useful to those, like, you know, building on AI.
swyx [00:17:59]: I think to some extent, I'm mixed opinion on that one because to some extent, your target audience is not people in AI Grant who are obviously at the frontier. Yeah. Do you disagree?
Micah [00:18:09]: To some extent. To some extent. But then, so a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypical power users of Artificial Analysis. Some of the people with the strongest opinions about what we're doing well and what we're not doing well and what they want to see next from us. Yeah. Yeah. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently for different models and different parts of your application to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're like not commercial customers of ours, like we don't charge for all our data on the website. Yeah. They are absolutely some of our power users.
swyx [00:19:07]: So let's talk about just the evals as well. So you start out from the general like MMLU and GPQA stuff. What's next? How do you sort of build up to the overall index? What was in V1 and how did you evolve it? Okay.
Micah [00:19:22]: So first, just like background, like we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric that we pull together currently from 10 different eval data sets to give what we're pretty confident is the best single number to look at for how smart the models are.
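To make the synthesis Micah describes concrete, here is a rough Python sketch of combining repeated runs of several evals into per-eval means with approximate 95% confidence intervals, then averaging them into one composite number. The eval names, scores, and equal weighting are illustrative assumptions, not Artificial Analysis's actual datasets, weights, or methodology.

```python
import statistics

def mean_with_ci(scores: list[float]) -> tuple[float, float]:
    """Mean and half-width of an approximate 95% confidence interval
    across repeated runs of the same eval."""
    mean = statistics.mean(scores)
    stderr = statistics.stdev(scores) / len(scores) ** 0.5
    return mean, 1.96 * stderr

# Hypothetical scores (% correct) from four repeated runs of three evals.
runs = {
    "qa_eval": [71.0, 72.5, 70.8, 71.9],
    "agentic_eval": [48.2, 50.1, 49.0, 47.5],
    "long_context_eval": [63.4, 62.8, 64.1, 63.0],
}

per_eval = {name: mean_with_ci(scores) for name, scores in runs.items()}
# Equal weighting of per-eval means into one composite number.
index = statistics.mean(mean for mean, _ in per_eval.values())

for name, (mean, ci) in per_eval.items():
    print(f"{name:>18}: {mean:5.1f} ± {ci:.1f}")
print(f"composite index: {index:.1f}")
```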
Obviously, it doesn't tell the whole story. That's why we publish the whole website of all the charts to dive into every part of it and look at the trade-offs. But best single number. So right now, it's got a bunch of Q&A type data sets that have been very important to the industry, like a couple that you just mentioned. It's also got a couple of agentic data sets. It's got our own long context reasoning data set and some other use case focused stuff. As time goes on, the things that we're most interested in, that are going to be important to the capabilities that are becoming more important for AI, what developers are caring about, are going to be first around agentic capabilities. So surprise, surprise. We're all loving our coding agents, and how the models are going to perform like that and then do similar things for different types of work is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.
swyx [00:20:46]: But I guess one thing I was driving at was like the V1 versus the V2 and how that changed over time.
Micah [00:20:53]: Like how we've changed the index to where we are.
swyx [00:20:55]: And I think that reflects on the change in the industry. Right. So that's a nice way to tell that story.
Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, I think how much progress has been made in the last two years. Like we obviously play the game constantly of like today's version versus last week's version and the week before and all of the small changes in the horse race between the current frontier and who has the best like smaller than 10B model like right now this week. Right. And that's very important to a lot of developers and people and especially in this particular city of San Francisco. But when you zoom out a couple of years ago, literally most of what we were doing to evaluate the models then would all be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence. We can talk about that more in a bit. So V1, V2, V3, we made things harder. We covered a wider range of use cases. And we tried to get closer to things developers care about as opposed to like just the Q&A type stuff that MMLU and GPQA represented. Yeah.
swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark and like looking around and asking questions about it. Yeah.
Micah [00:22:21]: Let's do it. Okay. This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.
George [00:22:26]: And I think a little bit about the direction that we want to take it. And we want to push benchmarks. Currently, the intelligence index and evals focus a lot on kind of raw intelligence. But we kind of want to diversify how we think about intelligence. And we can talk about it. But kind of new evals that we've kind of built and partnered on focus on topics like hallucination. And we've got a lot of topics that I think are not covered by the current eval set that should be.
And so we want to bring that forth. But before we get into that.
swyx [00:23:01]: And so for listeners, just as a timestamp, right now, number one is Gemini 3 Pro High. Then followed by Claude Opus at 70. GPT-5.1 High. You don't have 5.2 yet. And Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.
George [00:23:25]: Totally. A quick view of that is, okay, there's a lot. I love it. I love this chart. Yeah.
Micah [00:23:30]: This is such a favorite, right? Yeah. And almost every talk that George or I give at conferences and stuff, we always put this one up first to just talk about situating where we are in this moment in history. This, I think, is the visual version of what I was saying before about the zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year. And, I mean, you would remember that time period well of there being very open questions about whether or not AI was going to be competitive, like full stop, whether or not OpenAI would just run away with it, whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.
George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There's so many dots on it, but I think it reflects a little bit what we felt, like how crazy it's been.
swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there that are less traditional names. Yeah.
George [00:25:01]: It's models that we're kind of highlighting by default in our charts, in our intelligence index. Okay.
swyx [00:25:07]: You just have a manually curated list of stuff.
George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose what models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.
swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.
George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.
Micah [00:25:44]: Yeah, yeah, yeah. I agree. Yeah, well, a couple of weeks. It was Boxing Day in New Zealand when DeepSeek v3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024 and had run evals on the earlier ones and stuff. I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek v3. So this was the first of their v3 architecture, the 671B MoE.
Micah [00:26:19]: And we were very, very impressed.
That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of v3 and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with just an extremely strong base model, completely open weights, that we had as the best open weights model. So, yeah, that's the thing that you really see in the game. But I think that we got a lot of good feedback on Boxing Day last year.
George [00:26:48]: Boxing Day is the day after Christmas for those not familiar.
George [00:26:54]: I'm from Singapore.
swyx [00:26:55]: A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.
Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.
George [00:27:20]: There's been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric to, we now call it output speed, and then
swyx [00:27:32]: throughput makes sense at a system level, so we took that name. Take me through more charts. What should people know? Obviously, the way you look at the site is probably different than how a beginner might look at it.
Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into. Maybe so we can hit past all the, like, we have lots and lots of evals and stuff. The interesting ones to talk about today that would be great to bring up are a few of our recent things, I think, that probably not many people will be familiar with yet. So the first one of those is our Omniscience index. So this one is a little bit different to most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models and to test hallucination by looking at, when the model doesn't know the answer, so it's not able to get it correct, what's its probability of saying, I don't know, or giving an incorrect answer. So the metric that we use for Omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to the question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say, I don't know, instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models and the labs creating them to get higher scores. And almost every eval across all of AI up until this point, it's been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything. There's no incentive to say, I don't know. So we did that for this one here.
swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah.
George [00:29:31]: On that. And one reason that we didn't do that, or put that into this index, is that we think that the way to do that is not to ask the models how confident they are.
swyx [00:29:43]: I don't know. Maybe it might be though.
You put it like a JSON field, say, confidence, and maybe it spits out something. Yeah. You know, we have done a few evals podcasts over the years, and when we did one with Clementine of Hugging Face, who maintains the open leaderboard, this was one of her top requests: some kind of hallucination slash confidence calibration thing. And so, hey, this is one of them. Micah [00:30:05]: And I mean, like anything that we do, it's not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. Like, one of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not really measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. So we have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We will update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking it down quite granularly by topic. We've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah. swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would that be the other way around in a normal capability environment? I don't know. What do you make of that? George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability to, when they don't know something, say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over Gemini 2.5 Flash and 2.5 Pro, but, and if I add Pro quickly here. swyx [00:32:07]: I bet Pro's really good. Uh, actually no, I meant the GPT Pros. George [00:32:12]: Oh yeah. swyx [00:32:13]: Because the GPT Pros are rumored, we don't know for a fact, to be like eight runs and then an LLM judge on top. Yeah. George [00:32:20]: So we saw a big jump in, this is accuracy, so this is just the percent they get correct, and Gemini 3 Pro knew a lot more than the other models. So, big jump in accuracy, but relatively no change between the Google Gemini models, between releases, in the hallucination rate. Exactly. And so it's likely down to a different post-training recipe, like with the Claude models. Yeah. Micah [00:32:45]: That's what's driven this. Yeah. You can partially blame us, in how we define intelligence, for having until now not defined hallucination as a negative in the way that we think about intelligence. swyx [00:32:56]: And so that's what we're changing.
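For readers who want the scoring made concrete: below is a minimal sketch of the scheme described above, assuming one point for a correct answer, minus one for an incorrect answer, and zero for an explicit "I don't know," scaled to the -100 to +100 range, with the hallucination rate taken over the questions the model did not get right. This is our reading of the conversation, not Artificial Analysis's published scoring code.

```python
# Minimal sketch of an Omniscience-style score and hallucination rate, assuming:
# +1 for a correct answer, -1 for an incorrect answer, 0 for "I don't know".
# Follows the scheme described in the conversation, not any official implementation.

def omniscience_score(results):
    """results: list of 'correct' | 'incorrect' | 'idk' labels, one per question."""
    points = sum(+1 if r == "correct" else -1 if r == "incorrect" else 0 for r in results)
    return 100.0 * points / len(results)          # scaled to the -100..+100 range

def hallucination_rate(results):
    """Of the questions the model did not get right, how often did it guess wrong?"""
    not_correct = [r for r in results if r != "correct"]
    if not not_correct:
        return 0.0
    return 100.0 * sum(r == "incorrect" for r in not_correct) / len(not_correct)

# Toy example: 60 correct, 10 wrong guesses, 30 honest "I don't know" answers
results = ["correct"] * 60 + ["incorrect"] * 10 + ["idk"] * 30
print(omniscience_score(results))   # 50.0 -- wrong guesses cost points, abstentions don't
print(hallucination_rate(results))  # 25.0 -- of the 40 non-correct answers, 10 were guesses
```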
Uh, I know many smart people who are confidently incorrect. George [00:33:02]: Look at that. That is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas. One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay. swyx [00:33:32]: And is it sort of like a HumanEval type, or something different, or like a FrontierMath type? George [00:33:37]: It's not dissimilar to FrontierMath. So these are the kind of research questions that academics in the physics world would be able to answer, but models really struggle to answer. So the top score here is now 9%. swyx [00:33:51]: And the people that created this, like Minway and, actually, Ofir, who was kind of behind SWE-bench, and what organization is this? Oh, it's Princeton. George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore kind of new ideas in physics as a thought partner, just because they want the models to hallucinate. Sometimes it's something new. Yeah, exactly. swyx [00:34:21]: So not right in every situation, but I think it makes sense, you know, to test hallucination in scenarios where it makes sense. Also, the obvious question is, this is one of many that are out there. Every lab has a system card that shows some kind of hallucination number, and you've chosen to not endorse that and you've made your own. And I think that's a choice. Totally, in some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun. You provide it as a service here. You have to fight the, well, who are we to do this? And your answer is that we have a lot of customers and, you know, but like, I guess, how do you convince the individual? Micah [00:35:08]: I mean, I think for hallucinations specifically, there are a bunch of different things that you might care about reasonably, and that you'd measure quite differently. Like, we've called this the Omniscience hallucination rate, not trying to declare that it's, like, humanity's last hallucination eval. You could have some interesting naming conventions and all this stuff. The bigger picture answer to that, and it's something that I actually wanted to mention just as George was explaining Critical Point as well, is: as we go forward, we are building evals internally, we're partnering with academia, and we're partnering with AI companies to build great evals. We have pretty strong views, in various ways for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not necessarily obsessed with the idea that everything we do has to be done entirely within our own team. Critical Point is a cool example of where we were a launch partner for it, working with academia, and we've got some partnerships coming up with a couple of leading companies.
Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great data sets in the past that we've used to great success independently. And so between all of those approaches, we're going to be releasing more stuff in the future. Cool. swyx [00:36:26]: Let's cover the last couple, and then I want to talk about your trends analysis stuff. Totally. Micah [00:36:31]: So actually, I have one little factoid on Omniscience. If you go back up to accuracy on Omniscience, an interesting thing about this accuracy metric is that it tracks the total parameter count of models more closely than anything else that we measure. That makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric. We're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay. swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. Rumors. I hear all sorts of numbers. I don't know what to trust. Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, we've got all the open weights models, and you can squint and see that likely the leading frontier models right now are quite a lot bigger than the roughly one trillion parameters that the open weights models we're looking at here cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, but that's not out yet. Take those together, have a look, and you might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah. swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it. Yeah, totally. George [00:38:17]: They've also got different incentives in play compared to open weights models, which are thinking about supporting others in self-deployment. For the labs who are doing inference at scale, it's, I think, less about total parameters in many cases when thinking about inference costs, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed. Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously if you're a developer or company using these things, as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence. You should be looking at cost to run the index, and the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's what matters. swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while.
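A rough sketch of the "draw the line" reasoning Micah describes: fit knowledge-eval accuracy against the log of total parameters for open-weights models, then invert the fit at a frontier model's accuracy. All numbers below are hypothetical placeholders for illustration, not Artificial Analysis measurements.

```python
# Sketch of the "draw the line" extrapolation: fit knowledge-eval accuracy against
# log10(total parameters) for open-weights models, then invert the fit to guess the
# size implied by a closed model's accuracy. All data points are made-up placeholders.
import math

# (total parameters in billions, accuracy %) -- hypothetical open-weights points
open_weights = [(30, 12.0), (120, 22.0), (400, 31.0), (1000, 38.0)]

xs = [math.log10(p) for p, _ in open_weights]
ys = [a for _, a in open_weights]

# Ordinary least-squares fit: accuracy ~ slope * log10(params_B) + intercept
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def implied_params_billions(accuracy_pct):
    """Invert the fit: what total size would put this accuracy on the line?"""
    return 10 ** ((accuracy_pct - intercept) / slope)

# A hypothetical frontier model scoring 50% lands in the multi-trillion range here
print(f"{implied_params_billions(50.0):,.0f}B")
```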
Yeah. Micah [00:39:07]: But that is, like, on its own actually a very interesting one, right? That is, chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total size of the models, especially with the upcoming hardware generations. Yes. swyx [00:39:29]: So, you know, taking off my shitposting face for a minute. At the same time, I do feel like, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out, and therefore we need to start exploring at least a different path. GDPVal, I think, is only like a month or so old. I was also very positive when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool. And you have your own version. George [00:39:59]: It's a fantastic data set. Yeah. swyx [00:40:01]: And maybe we'll recap for people who are out of the loop. It's like 44 tasks, based on some kind of GDP cutoff, that are meant to represent broad white-collar work that is not just coding. Yeah. Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and some input files for a lot of them. The 44 is divided into, like, 220, 225 maybe, subtasks, which are the level at which we run them through the agent. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work. No eval is perfect; there are always going to be more things to look at, largely because in order to make the tasks well enough defined that you can run them, they need to only have a handful of input files and very specific instructions. And so I think the easiest way to think about them is that they're like quite hard take-home exam tasks that you might do in an interview process. swyx [00:40:56]: Yeah, for listeners, it is no longer like a long prompt. It is like, well, here's a zip file with a spreadsheet or a PowerPoint deck or a PDF, go nuts and answer this question. George [00:41:06]: OpenAI released a great data set, and they released a good paper which looks at performance across the different web chatbots on the data set. It's a great paper, I encourage people to read it. What we've done is taken that data set and turned it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the data set, and then we developed an evaluator approach to compare outputs. It's kind of AI-enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned to human preferences. One data point there is that, even as the evaluator, Gemini 3 Pro interestingly doesn't actually do that well. So that's kind of a good example of what we've done in GDPVal AA. swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM-as-judge is self-preference, that models usually prefer their own output, and in this case, it was not that.
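A sketch of what the criteria-based pairwise grading described above could look like. The prompt wording and the `call_judge` helper are placeholders, not the actual GDPVal AA evaluator; per the conversation, the real pipeline also extracts visual renderings of the output files before grading.

```python
# Sketch of a criteria-based pairwise judge for document-producing tasks, in the spirit
# of the evaluator described above. `call_judge` stands in for a call to whatever judge
# model you use; here it returns a canned reply so the example runs.

def call_judge(prompt: str) -> str:
    """Placeholder judge model: wire this to a real model API; returns a canned verdict."""
    return "A"

def pairwise_judgment(task_criteria: str, output_a: str, output_b: str) -> str:
    prompt = (
        "You are grading two attempts at the same work task.\n"
        f"Task criteria:\n{task_criteria}\n\n"
        f"Attempt A:\n{output_a}\n\nAttempt B:\n{output_b}\n\n"
        "Which attempt more effectively meets the criteria? Answer with exactly 'A' or 'B'."
    )
    verdict = call_judge(prompt).strip().upper()
    return verdict if verdict in ("A", "B") else "A"  # crude fallback for malformed replies

print(pairwise_judgment("Summarize the quarterly figures in one slide.",
                        "draft A ...", "draft B ..."))  # -> 'A' with the canned judge
```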
Totally. Micah [00:42:08]: I think the way we're thinking about the places where it makes sense to use an LLM-as-judge approach now is quite different to some of the early LLM-as-judge stuff a couple of years ago, because some of that, and MT-Bench was a great project that was a good example of this a while ago, was about judging conversations and a lot of style-type stuff. Here, the task the grading model is doing is quite different to the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter and web search, the file system, to go through many, many turns to try to create the documents. Then on the other side, when we're grading it, we're running it through a pipeline to extract visual and text versions of the files to provide to Gemini, and we're providing the criteria for the task and getting it to pick which of the two potential outputs more effectively meets the criteria. It turns out that it's just very, very good at getting that right; it matched human preference a lot of the time. I think it's got the raw intelligence, but it's combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and the fact that we're comparing against criteria, not just zero-shot asking the model to pick which one is better. swyx [00:43:26]: Got it. Why is this an ELO and not a percentage, like GDPVal? George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks. Some of the tasks. swyx [00:43:43]: What task is that? George [00:43:45]: I mean, it's in the data set. Like, be a YouTuber? It's a marketing video. Micah [00:43:49]: Oh, wow. What? Like, the model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do that with a code interpreter. I mean, the computer use stuff doesn't work quite well enough, and so on and so on, but yeah. George [00:44:02]: And so there's no kind of ground truth, necessarily, to compare against to work out percentage correct. It's hard to come up with correct or incorrect there. And so it's on a relative basis, and we use an ELO approach to compare outputs from each of the models across the tasks. swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give it an ELO, so you have a human in there. I think what's helpful about GDPVal, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar for, like, well, if you've crossed 50, you are superhuman. Yeah. Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. It's one of the reasons that presenting it as an ELO is quite helpful: it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about it as a human is quite different to how the models would go about it.
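And a minimal sketch of turning those pairwise verdicts into an ELO-style rating, using the standard Elo update; the K-factor and starting rating here are arbitrary illustrative choices, not Artificial Analysis's parameters.

```python
# Minimal Elo aggregation over pairwise judge verdicts: every model starts at 1000,
# and each comparison nudges the winner up and the loser down. K and the starting
# rating are arbitrary illustrative choices.
from collections import defaultdict

def expected(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_ratings(comparisons, k: float = 16.0, start: float = 1000.0):
    """comparisons: iterable of (model_a, model_b, winner) with winner in {'A', 'B'}."""
    ratings = defaultdict(lambda: start)
    for a, b, winner in comparisons:
        score_a = 1.0 if winner == "A" else 0.0
        exp_a = expected(ratings[a], ratings[b])
        ratings[a] += k * (score_a - exp_a)
        ratings[b] += k * ((1.0 - score_a) - (1.0 - exp_a))
    return dict(ratings)

# Toy run with made-up verdicts
print(elo_ratings([("model-x", "model-y", "A"),
                   ("model-y", "model-z", "B"),
                   ("model-x", "model-z", "A")]))
```

In practice you would shuffle or iterate over the comparisons so the final ratings don't depend on match order.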
Yeah. swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there. Is that like just one last, like... Micah [00:45:20]: Well, no, no, no, it is the best model released by Meta. And... so it makes it into the homepage default set, still, for now. George [00:45:31]: Another inclusion that's quite interesting is we also ran it across the latest versions of the web chatbots. And so we have... swyx [00:45:39]: Oh, that's right. George [00:45:40]: Oh, sorry. swyx [00:45:41]: Yeah, I completely missed that. Okay. George [00:45:43]: No, not at all. So that, which has a checkered pattern. So that is their harness, not yours, is what you're saying. Exactly. And what's really interesting is that if you compare, for instance, Claude Opus 4.5 using the Claude web chatbot, it performs worse than the model in our agentic harness. And in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created. swyx [00:46:13]: My backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else. Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, they have a cost goal; we let the models work as long as they want, basically. Yeah. Do you copy-paste manually into the chatbot? Yeah. That was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as the hundreds of models. swyx [00:46:38]: Well, I don't know, talk to Browserbase. They'll automate it for you. You know, I have thought about, like, well, we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah. Micah [00:46:53]: And that's grown a huge amount over the last year, right? The tools that are available have actually diverged, in my opinion, a fair bit across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever. swyx [00:47:10]: What tools and what data connections come to mind when you say that? What's interesting, what's notable work that people have done? Micah [00:47:15]: Oh, okay. So my favorite example on this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. So for me, that's Google Drive, OneDrive, our Supabase databases if we need to do some data analysis or something. Preferably the model can be plugged into all of those things and can go do some useful work based on it. The thing I find most impressive currently, that I am somewhat surprised works really well in late 2025, is that I can have models use the Supabase MCP to query, read-only of course, and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and it can read my Gmail and my Notion. Okay, you actually use that. That's good. Is that a Claude thing?
To varying degrees, both ChatGPT and Claude right now. I would say that this stuff, like, barely works, in fairness, right now. Like. George [00:48:33]: Because people are actually going to try this after they hear it. If you get an email from Micah, odds are it wasn't written by a chatbot. Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet. swyx [00:48:46]: And so you can feel it, right? And yeah, this time next year, we'll come back and see where it's going. Totally. Supabase, shout out, another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra. George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users. And we probably do some things more manually than we should in the Supabase support line, because they're being super friendly. One extra point regarding GDPVal AA is that, on the basis of the overperformance of the models compared to the chatbots, it turns out, we realized that, oh, the reference harness that we built actually works quite well on generalist agentic tasks. This proves it, in a sense. And so the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else? Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPVal we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context, we had to give them a custom tool. But yeah, exactly. You can explain an expert. No. George [00:50:21]: So it turned out that we created a good generalist agentic harness, and so we released it on GitHub yesterday. It's called stirrup. So if people want to check it out, it's a great base for building a generalist agent for more specific tasks. Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code, and the coding agents can work with it super well. swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think maybe in other similar environments, the terminal-bench guys have done sort of the Harbor thing. And so it's a bundle of, well, we need our minimal harness, which for them is Terminus, and we also need the RL environments or a Docker deployment thing to run independently. So I don't know if you've looked at Harbor at all. Is that like a standard that people want to adopt? George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love terminal-bench and host benchmarks of terminal-bench on Artificial Analysis. We've looked at it from a coding agent perspective, but could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough.
They've gotten better tools, and they can perform better when just given a minimalist set of tools: let them run, let the model control the agentic workflow, rather than using another framework that's a bit more built out and tries to dictate the flow. Awesome. swyx [00:51:56]: Let's cover the Openness Index, and then let's go into the report stuff. So that's the last of the proprietary AA numbers, I guess. I don't know how you classify all these. Yeah. Micah [00:52:07]: Call it the last of the three new things that we're talking about from the last few weeks. Because, I mean, we do a mix of stuff where we're using open source, stuff we open source ourselves, and proprietary stuff that we don't always open source. The long context reasoning data set last year we did open source. And then of all the work on performance benchmarks across the site, some of them we're looking to open source, but some of them we're constantly iterating on, and so on. So there's a huge mix, I would say, of stuff that is open source and not, across the site. So that's LCR, for people. Yeah. swyx [00:52:41]: But let's talk about open. Micah [00:52:42]: Let's talk about the Openness Index. This here is, call it, a new way to think about how open models are. We have, for a long time, tracked whether the models are open weights and what the licenses on them are. And that's pretty useful; that tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important, and that we haven't tracked until now, and that's how much is disclosed about how the model was made. So, transparency about data, pre-training data and post-training data, whether you're allowed to use that data, and transparency about methodology and training code. Basically, those are the components. We bring them together to score an Openness Index for models, so that you can in one place get this full picture of how open models are. swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this matters. I don't know what the numbers mean, apart from, is there a max number? Is this out of 20? George [00:53:44]: It's out of 18 currently, and so we've got an Openness Index page, but essentially these are points. You get points for being more open across these different categories, and the maximum you can achieve is 18. So AI2, with their extremely open Olmo 3 32B Think model, is the leader, in a sense. swyx [00:54:04]: What about Hugging Face? George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to get the intelligence benchmarks run to get it on the site. swyx [00:54:12]: You can't have an openness index and not include Hugging Face. We love Hugging Face. We'll have that up very soon. I mean, you know, RefinedWeb and all that stuff. It's amazing. Or is it called FineWeb? FineWeb. FineWeb. Micah [00:54:23]: Yeah, no, totally. Yep. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the company is contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do the Intelligence Index on, on the site.
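For readers curious what the minimalist-harness idea looks like in code, here is a generic sketch of a model-driven tool loop of the kind described above. It is not the actual stirrup implementation; `call_model`, the message format, and the stubbed tools are all assumptions for illustration.

```python
# Generic sketch of a minimalist agent loop: give the model a few tools (web search,
# code execution, etc.) and let it drive the workflow until it returns an answer.
# Not the actual stirrup code; the model call here replays a canned script so it runs.

_script = [
    {"tool": "web_search", "args": {"query": "example"}},  # first step: model picks a tool
    {"answer": "done"},                                     # second step: model finishes
]

def call_model(messages: list[dict]) -> dict:
    """Placeholder model: replays a canned script instead of calling a real API."""
    return _script[min(len(messages) - 1, len(_script) - 1)]

TOOLS = {
    "web_search": lambda query: f"<search results for {query!r}>",              # stub
    "run_code":   lambda code:  f"<output of running {len(code)} chars of code>",  # stub
}

def run_agent(task: str, max_turns: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        step = call_model(messages)
        if "answer" in step:                  # the model decided it is done
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])   # the model chose the next tool
        messages.append({"role": "tool", "content": str(result)})
    return "gave up after max_turns"

print(run_agent("look something up"))  # -> 'done' with the canned script
```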
And it's just an extra view to understand. swyx [00:54:43]: Can you scroll down to this? The trade-offs chart. Yeah, that one. This really matters, right? Obviously, because you can b
What is "the devil" doing in the title of this Bad Batch episode? Apparently it was a big deal, but the divas appreciate the reference. In this episode of Dark Side Divas we discuss the Star Wars - The Bad Batch episode "Devil's Deal" (s1e11). We return to Ryloth were The Galactic Empire has promised to protect the citizens of the planet, and some people are okay with this after being in The Clone Wars for many years. Peace doesn't last long in a galaxy far-far away! Listen to this episode to hear what Stef and Chris have to say.
View all cards mentioned in this episode. By popular demand, Andy and Anthony rush back into their series exploring modal mechanics through Magic's history. Continuing where they left off, they talk about mechanics from Return to Ravnica to Fate Reforged. They argue over whether some mechanics fit a useful definition of "modal" and what the mechanics, and individual modal cards, offer to Cube designers. Discussed in this episode: Song-Poem Documentary; Modal Mechanics 001 — Some Nonzero Number of Mechanics Are Kicker; Modal Mechanics 002 — Some Modal Mechanics Evoke a Strong Response; Reading Rainbow Cube; 100 Ornithopters; The Design of Everyday Things; Lucky Paper Patreon. If you'd like to show your support for the show, please consider backing Lucky Paper on Patreon or leaving us a review on iTunes or wherever you listen. Check us out on Twitch and YouTube for paper Cube gameplay. You can find the hosts' Cubes on Cube Cobra: Andy's "Bun Magic" Cube and Anthony's "Regular" Cube. You can find both your hosts in the MTG Cube Talk Discord. Send in questions to the show at mail@luckypaper.co or our P.O. box: Lucky Paper, PO Box 4855, Baltimore, MD 21211. Musical production by DJ James Nasty. Timestamps: 0:00 - Intro; 3:03 - Modal Mechanics: Part 003; 4:44 - Scavenge; 9:03 - Bloodrush; 17:17 - Fuse; 27:30 - Bestow; 36:03 - Monstrosity; 37:59 - Will of the Council; 42:27 - Sieges; 48:41 - Planeswalker Addendum
Links & Mentions: Consult booking link: www.dryazdancoaching.com/consult Email me: DrDYazdan@gmail.com Make more money video: www.dryazdancoaching.com/MDM Follow me for more tips: (@DrYazdan) www.instagram.com/dryazdan and (@DrYazdanCoaching) www.Instagram.com/dryazdancoaching Episode Summary: Instagram has changed — and so has your audience. In this episode, Dr. Yazdan breaks down exactly what kind of content will actually grow your account in 2026. Whether you're new to social media or you've been posting consistently without seeing results, this episode will help you reconnect with your purpose, attract the right patients, and turn your practice into a powerful personal brand. Dr. Yazdan shares what's working now — from storytelling and authenticity to short-form video strategies — and explains why the secret to Instagram growth isn't hacks or algorithms… it's connection.
Do you plan to hit your sales goals, or just hope you will? You set goals in January. By March, they are forgotten. It's because most salespeople confuse wanting something with planning for it. “I want to close more deals this year.” That is not a goal. That is a wish. “I want to be better at prospecting.” Still not a goal. Just a vague intention that leads nowhere. Real sales goals require a system. Not motivation. Not inspiration. A repeatable process that turns big numbers into daily actions you can actually execute. This four-step sales goal planning system turns annual quotas into weekly, executable actions that salespeople can control and measure. Why Most Sales Goals Fail Before February Most salespeople treat goal-setting like a New Year's resolution. They write something down, feel good about it for a week, then watch it disappear under the weight of quota pressure and full calendars. Three things kill sales goals before they have a chance: Lack of specificity. Your brain cannot attach to something vague. There is no finish line, no way to measure progress, and no emotional connection to the outcome. No breakdown. Big numbers paralyze you. Looking at an annual quota feels impossible. Your brain shuts down. You don't know where to start, so you don't start at all. Zero accountability. Goals that live only in your head are easy to abandon. There is no consequence for missing them because nobody, including you, is really tracking them. Research consistently shows that people who write down specific, challenging goals and track them perform significantly better than those who rely on vague intentions or hope. The difference between hitting your number and missing it is having a systematic approach to sales goal planning and the discipline to execute it. Step 1: Identify Your Major Milestones Big goals overwhelm you. When you stare at “close $1.5 million this year,” your brain checks out. It feels too big, too far away, and too abstract. The first step in effective sales goal planning is breaking that number into key checkpoints. These milestones tell you whether you are on track or falling behind. For a $1.5 million annual goal: Q1: $375K Q2: $375K Q3: $375K Q4: $375K Now you are not chasing $1.5 million. You are chasing $375K this quarter. Still significant, but manageable. Take it further. What does $375K mean for your pipeline? If your average deal size is $50K, you need eight closed deals per quarter. If your close rate is 25 percent, you need 32 qualified opportunities in your pipeline each quarter to close those eight deals. Suddenly, that intimidating annual number becomes a concrete monthly target of roughly 11 qualified opportunities. You cannot control whether a deal closes, but you can control how many qualified opportunities you put in your pipeline. That is the number you chase. Step 2: List Your Specific Tasks Milestones tell you where you need to be. Tasks tell you how to get there. These numbers will vary based on your market, deal size, and conversion rates. The point is forcing your goal all the way down to weekly actions you can control. This step requires brutal honesty about the activities that actually generate results in your sales process. If you need 11 qualified opportunities per month and your prospecting-to-opportunity conversion rate is 10 percent, you need 110 prospecting conversations monthly. What does that look like in weekly tasks? 
30 outbound calls, 15 LinkedIn connection requests with personalized messages, 10 follow-up emails to lukewarm prospects, and 3 referral conversations. Assign realistic timeframes to each task. Making 30 calls doesn't require four hours. It requires 45 minutes of focused effort. Block the time, make the calls, move on. The more specific you get, the less room there is for excuses. You either completed the tasks or you did not. You are either on pace or you are behind. If you cannot list the specific weekly tasks required to hit your goal, you do not have a sales goal. You have a hope. Step 3: Consider Obstacles and Resources Every goal has obstacles waiting to derail it. Ignoring them does not make them disappear. Identify what will try to stop you, then plan around it. The biggest time killers in sales are rarely mysterious. Meetings that don't move deals forward. Prospects who will never buy but keep you engaged. Administrative tasks that someone else should handle. Reorganizing your CRM instead of filling it with opportunities. Here is how to expose them. Track your time for one week. Write down every activity in 30-minute blocks. No editing. No judgment. Just honest data. At the end of the week, categorize everything: income-producing activities like prospecting, discovery, and closing; income-supporting activities like proposals, follow-up, and research; and waste, which is everything else. Most salespeople discover they spend less than 30 percent of their time on income-producing activities. If that is you, you just found out why you are not hitting your goals. Once you know where your time actually goes, you can protect the activities that matter. Block prospecting time before meetings start. Batch administrative work. Decline meetings where your presence adds no value. Now identify resource gaps. What do you need that you don't have? Skills you need to develop. Tools that would improve your results. Support from leadership to open doors with key accounts. Find these gaps early. Discovering you lack a critical skill in November is too late. Step 4: Stay Flexible Without Lowering the Goal Sales goal planning requires flexibility in tactics, not flexibility in commitment. Markets shift. Buyers change. Your original plan may need adjustment. That does not mean the destination changes. Review your goals monthly and let the data guide you. Ask three questions: Am I on track? What's working? What's not working? If something is working, do more of it. If something isn't working, adjust your approach. For example, your data might show inconsistent execution, poor list quality, or weak follow-up. The answer is not abandoning foundational activities like cold calling. The answer is tightening your process, improving targeting, or reinforcing outreach with disciplined follow-up. Flexibility means adjusting how you execute, not lowering the standard because the work is harder than expected. Salespeople who hit ambitious goals stay flexible in their methods and uncompromising about the outcome. Monthly reviews keep you honest. They prevent you from wasting months on ineffective activity before realizing you are off track. Execute Your Sales Goal Planning System Take one goal right now. Write it down with a specific number and a deadline. Break it into three to five milestones. List the weekly tasks required. Identify your two biggest obstacles and the resources you need to overcome them. Then execute. Review weekly. Adjust monthly. Never stop driving toward the outcome.
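For readers who want the quota arithmetic from this article as a reusable calculator, here is a small sketch using the example numbers above (a $1.5M quota, $50K average deal, 25% close rate, 10 prospecting conversations per qualified opportunity); swap in your own figures.

```python
# The quota math from the article as a small calculator. Rounding up at each step
# keeps the targets conservative and matches the example numbers in the text.
import math

annual_quota    = 1_500_000   # $
avg_deal_size   = 50_000      # $
close_rate      = 0.25        # share of qualified opportunities that close
convos_per_opp  = 10          # prospecting conversations per qualified opportunity (10% rate)
weeks_per_month = 4

quarterly_target  = annual_quota / 4
deals_per_quarter = math.ceil(quarterly_target / avg_deal_size)    # 8
opps_per_quarter  = math.ceil(deals_per_quarter / close_rate)      # 32
opps_per_month    = math.ceil(opps_per_quarter / 3)                # 11
convos_per_month  = opps_per_month * convos_per_opp                # 110
convos_per_week   = math.ceil(convos_per_month / weeks_per_month)  # 28

print(f"${quarterly_target:,.0f}/quarter -> {deals_per_quarter} deals -> "
      f"{opps_per_quarter} opportunities -> {opps_per_month}/month -> "
      f"{convos_per_month} conversations/month (about {convos_per_week} a week)")
```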
This system works because it eliminates ambiguity. You know what needs to happen this week. Obstacles don't blindside you because you planned for them. You aren't following a broken plan for six months because you built in regular reviews. While other salespeople hope for a good year, you will be executing a plan. While they react to whatever fires pop up, you will be proactively driving toward measurable outcomes. The difference between salespeople who hit their goals and those who do not is not talent or luck. It is having a systematic process for turning big goals into daily actions and the discipline to follow through when motivation fades. Sales goals don't fail because you lack desire—they fail because the plan isn't specific enough to execute. Download the FREE Goal Planning Guide to turn your sales goals into results.
Enjoy the last installment of our retrospective dedicated to the songs discovered over the past six months that left a lasting impression. The playlist features The Five Corners Quintet; Makiko Hirabayashi; Rufus Wainwright, Pacific Jazz Orchestra; Brad Mehldau; Anthony Wilson [pictured]; and Roberto Ottaviano. Detailed playlist at https://spinitron.com/RFB/pl/21722582/Mondo-Jazz [from "Lighthouse" onward]. Happy listening!
After braving the Black Friday crowds outside Binny's and pissing in bottles in now temperate weather, Alex and Stephen have sourced a horizontal of all six of Goose Island's Bourbon Country Brand Stout releases from this year. They'll be goosin' it, they'll be loosin', and they'll be Power Rankin' it to see how this year's Chicago Juice stacks up. Nothing like 14% ABV beers in the AM to kick off the new year. In the Beer News, the Canadian Brewing Awards integrate AI into their voting system and the judges take a hard stance against the clankers, Bud Light brews beer with melted snow from the Buffalo Bills stadium. Thanks to Uinta Brewing Company for sponsoring this episode. Utah's favorite in '93 has released Cutthroat Zero, their non-alcoholic Pale Ale which is on shelves now! Follow Uinta Brewing Company on Instagram @UintaBrewing. To get involved with the "Life" International Barleywine Collab, click the link for info about the recipe, BSG discount, and links to help raise awareness of colon cancer. If you'd like to make a direct donation to help support Alex, head over to his GoFundMe. For more info about colon cancer and to help support the fight against it check out the Colon Cancer Foundation. Head to our Patreon for weekly exclusive content. Get the Malt Couture Officially Licensed T-shirt. Follow DontDrinkBeer on Instagram and Twitter
What are the top sports stories of 2025? Steve Thomson and producer Jonathan Lowe pore through the list that CBS Sports provided.
Welcome back to Bri Books — the podcast that educates, encourages, and inspires by exploring ideas both on and off the page. Today's episode is about winter lifestyle favorites: the soft hobbies, rituals, and everyday comforts that carried me through 2025 and that I'm intentionally bringing with me into 2026. You've heard a lot about the "soft life" and the "soft girl era." I want to offer a reframing: your grandmother may be the softest woman you know. Softness isn't new. It's inherited. It's practiced. It's slow. If you're new to the show, leave a review of Bri Books on Apple Podcasts, and listen to Bri Books on Apple Podcasts and Spotify. Please tell me where you're traveling to by using #bribooks on Instagram and subscribe to the Bri Books newsletter at bribookspod.com/newsletter. This episode isn't about hustle or optimization. It's about winter evenings, quiet joy, and choosing process over productivity. Last winter, I noticed myself reaching less for outcomes and more for ways of being — warmth, texture, ritual, and time that felt expansive rather than efficient. These are the lifestyle favorites that came out of that season and are staying with me. 1. Embroidery Embroidery is the ultimate soft hobby. It's tactile, forgiving, and slow in the best way. You can pick it up for ten minutes or lose an entire evening to it. Best of all, you always have something to show for your time: a few stitches, a pattern emerging, a garment mended. It requires no screens, very little space, and pairs beautifully with audiobooks, podcasts, or quiet TV. On dark winter nights, embroidery feels deeply grounding. 2. Popcorn From the Cob This was a surprise favorite of 2025. Popping kernels directly off a dried corn cob feels old-fashioned and ceremonial. It turns a snack into an event. Pop it on the stove, finish with butter and flaky salt, and eat while reading or watching snow fall. It's nostalgic, humbling, and cozy: and it happens fast enough that it asks for your full attention. 3. Candle Making & Light as Ritual I've been making candles for years, but winter 2025 made it a true ritual. Choosing the scent, wax, and vessel is an act of intention. I make candles in batches early in the season and burn them slowly throughout winter so my home smells familiar and grounding. In long, dark months, light matters. So start making your candles. 4. Gardening (Even in Winter) Gardening doesn't stop in winter; it changes form. Winter gardening looks like planning, seed sorting, journaling, and tending indoor plants. It's a reminder that growth doesn't always look active. Winter is when I reflect on what I want to grow — literally and metaphorically — in the year ahead. 5. A New Duvet from Cultiver One of my most meaningful upgrades of 2025 was investing in better sleep. A Cultiver linen duvet changed how winter nights felt. Linen regulates temperature beautifully, feels lived-in, and makes your bed feel like a destination. When nights are long, rest should feel intentional. 6. A Beautiful Cup from Jinen This may sound small, but it isn't. A really good cup changes how you experience mornings. Texture matters. Weight matters. A ceramic or natural-finish cup slows you down and makes tea or coffee feel ceremonial. Winter mornings deserve softness. This porcelain Hasami cup from Jinen has become my absolute favorite for everyday use. 7. Instant Pot (and Instant Pot Culture) In 2025, I leaned into comfort cooking: soups, stews, beans, and broths.
The Instant Pot makes nourishment accessible without urgency. Batch cooking on Sundays meant weekday dinners felt cared for instead of chaotic. 8. Farmers Markets (Even in Winter) Winter farmers markets are quieter, more intentional, and deeply communal. Root vegetables, bread, eggs, preserves. Shopping local in winter feels like an act of care — a reminder that provision exists in every season, just in different forms. 9. Painting Painting returned to my life without pressure to be good. Winter painting is about mood, texture, and emotion — not outcome. Paint in low light. Let it be messy. Let it exist just for you. 10. New Boots & a New Coat A good pair of winter boots grounds you — literally. Practical, wearable winter clothing makes cold weather feel intentional instead of inconvenient. Winter style should support your life, not complicate it. These favorites aren't about consumption. They're about attention. Soft hobbies teach us to stay. Winter rituals remind us we're allowed to move slowly. As we head into 2026, I'm choosing warmth, intention, and creativity — and leaving urgency behind. If you're new to the show, leave a review of Bri Books on Apple Podcasts, and listen to Bri Books on Apple Podcasts and Spotify.Please tell me where you're traveling to by using #bribooks on Instagram and subscribe to the Bri Books newsletter at bribookspod.com/newsletter.
As we come to the end of another year, it feels like a good moment to pause—not to summarize everything, but to listen back. This episode is built around songs that lingered, resurfaced, and quietly insisted on being remembered. The playlist features Jérémie Lucchese; Chaerin Im [pictured]; Emiliano D'Auria; Ron Blake; James Brandon Lewis; Jakob Bro, Joe Lovano, Larry Grenadier, Thomas Morgan, Anders Christensen, Joey Baron, and Jorge Rossy. Detailed playlist at https://spinitron.com/RFB/pl/21722582/Mondo-Jazz [from "L'Ogre" to "Mumbo Jumbo"]. Happy listening!
The guys have been red hot so why not present another teaser? Download the latest episode of Cash the Ticket today. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
The College Football Playoff games are upon us, Valenti and Costa break down all the CFP games as well as the other "big" bowl games on this episode of Cash the Ticket. Download and subscribe today. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Hey friend. I'm so glad you're here with me today. If you're listening to this, chances are life has been feeling a little… full. Not bad. Not wrong. Just full. Between work, kids, school, meals, and all the little decisions you make from the second your feet hit the floor, it's no wonder your heart is craving rest. I'm just sitting here in our -20 degree weather batching some episodes, as over the next few weeks we are going to be enjoying all the family and grandkids. Good day to stay inside! Today I want to talk about something that has changed my life and has changed the lives of thousands of busy moms I've walked with… Simplicity. Not the Pinterest kind. Not the everything-is-perfect kind. But the kind that actually makes your life easier. The kind that brings peace back into your home. The kind that reconnects you with the moments that matter most. Most moms don't realize this, but complexity steals our joy one tiny decision at a time. Every extra step. Every scattered routine. Every night you walk into the kitchen wondering what on earth to cook. It all chips away at your energy. And the truth is, your exhaustion isn't a character flaw. It's not because you should be doing better. You're tired because your life is overcrowded. Simplicity pulls back the layers so you can breathe again. And Scripture tells us that God is not a God of confusion, but of peace. When your life feels chaotic, it doesn't mean you've failed; it means something needs to be simplified. When moms hear the word simplify, they think they need to add something, like more organization, more color-coded systems, more complicated meal plans. But true simplicity is about removing what drains you. It's choosing fewer decisions instead of more. It's creating small repeatable rhythms so your brain doesn't have to work so hard. It's letting meals become easier by cooking once and eating all week. It's allowing grocery day to be simple by keeping the same staples on hand. It's trusting that less really is enough. When you take away the noise, you uncover a life that was already waiting for you - slower, calmer, more intentional. Simplicity doesn't just change your to-do list. It changes you. Your evenings feel peaceful instead of rushed. Your kids get a more present, less frazzled version of you. You start saving money because you're not grabbing takeout out of panic. You stop living in survival mode and start living on purpose. You begin to feel like you have your life back, even while working full-time. So many moms tell me, I didn't know it could feel this easy. But it can. And it starts with small shifts, often one simple system at a time. I didn't start out living simply; I learned it because I had to. As a nurse, I worked long shifts. I came home exhausted, hungry, and feeling like I had nothing left to give. I didn't want fast food to raise my kids, and I didn't want stress to define our home. So I started building small systems. Batch cooking. Simple routines. Staples I always kept on hand. Healthy meals that took minutes, not hours, and an automated online business. And friend… that changed everything. It gave me margin. It gave me peace. It gave me time with my family that I didn't have before. And that's why I teach it now, because I know what it feels like to carry more than you were ever meant to hold. If you're listening and thinking, I want this… but I don't know where to start, hear this: You don't have to overhaul your life. You don't need a planner full of color-coded tabs. You don't need a new personality.
You just need one simple step. One small shift that lightens your load. And this January, I'm leading a FREE No Spend Challenge that will help you reset your budget, simplify your daily decisions, and start the year with intention, not stress & overwhelm. It's designed for the busy mom who feels like she's drowning in expenses, rushing through evenings, and wishing life could slow down. And it's completely free. Link for NO SPEND CHALLENGE -> https://stan.store/ClaimingSimplicity You can join at the link below, and each week I'll walk with you through small, simple changes that make a big difference in your spending. Friend, simplicity isn't about doing less so you can be more productive. Simplicity is about making room for what truly matters. It's about creating space for joy, presence, peace, and purpose. You deserve a life that feels lighter. You deserve systems that support you. And you deserve to live in a home where stress doesn't get the final say. I'm cheering for you, and I'm grateful we get to walk this journey together. Happy New Year Friend! Monica
On this week's episode of The Whiskey Trip, Big Chief pulls up a chair with his brother Little Feather for a no-nonsense, end-of-year sit-down rooted in family, flavor, and hard-earned opinions. With a year of miles behind them and a table full of whiskey in front of them, the brothers look back on Big Chief's 2025 Whiskey Trail while taking on the challenge of naming Whiskey of the Year and Distillery of the Year. It is part reflection, part debate, filled with laughs, brotherly jabs, and a deep respect for the craft and the people behind it. To strip away hype, labels, and reputation, the tasting is done completely blind, with all five whiskeys poured into plain mason jars. No fancy glassware and no branding, just whiskey. Each pour is judged on nose, palate, mouthfeel, finish, and balance, forcing every whiskey to stand on its own merits. The mason jars set an honest, humble tone and lead to a few surprises and some strong, unfiltered reactions. As the tasting unfolds, Little Feather flips the mic and interviews Big Chief about the highs, lessons, and standout moments from the 2025 Whiskey Trip. Between sips, they talk road miles, distillery visits, and the people who made the journey matter. The conversation naturally turns forward, with Big Chief sharing what is already taking shape for 2026, including new regions, deeper dives into the craft, and a continued commitment to giving distillers a real voice. The five whiskeys competing for Whiskey of the Year represent a wide range of styles from stops along the trail. The lineup includes Anita's Choice, a six-grain bourbon from Burnt Church Distillery; Reverence from 1845 Distilling Company; a bold Cask Strength Rye from Ponfeigh Distillery; Broken Halo Cask Strength from War Trail Distillery; and Batch 37 from Barrell Craft Spirits. With no labels to lean on, each whiskey earns praise or criticism based solely on what is in the jar. When the dust settles, Whiskey of the Year goes to 1845 Preemption Reverence, a pour that rose above the rest when judged blind for its balance, depth, and character. Distillery of the Year honors go to Trinity River Distillery, recognized not only for its whiskey but for the full experience it delivers, from immersive tours to the renovation of its historic ranch-style bean factory, along with its Whiskey Kitchen and Bourbon Nursery. Whether listeners agree with the final calls or not, this episode is about honoring the pours, the places, and the passion that keep American whiskey moving forward. Pour yourself something good, pull up a chair, and Take the Ride with Big Chief and Little Feather as they close out 2025 and set their sights on what is ahead.
Patreon: https://patreon.com/dadbatchpod email: dadbatchpod@gmail.com Subscribe to The Dad Batch on YouTube Get The Dad Batch merch: https://shop.thedadbatch.com Social media: instagram.com/dadbatchpod Follow the hosts on social media: instagram.com/stevie.kickz instagram.com/alphaignition instagram.com/sithing.aint.easy Instagram.com/tech.badbatch instagram.com/pabufrik instagram.com/leftcoastavenger
Contrary to his previous comments, the latest release of Epstein documents shows President Trump flew on Epstein's jet at least 8 times. Plus, while new data shows a booming economy, consumer confidence is at its lowest level since the April tariff rollout. And, the White House escalates pressure on Venezuela with more boat strikes and the threat of military action. Glenn Thrush, Mychael Schnell, David Drucker, Ron Insana, Justin Wolfers, Lt. Gen. Mark Hertling, and Brandon Ambrosino join The 11th Hour this Tuesday night. To listen to this show and other MS podcasts without ads, sign up for MS NOW Premium on Apple Podcasts. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The Board Brought To You By FanDuel America's #1 Sportsbook, Make Every Moment More - Bowl Batch 2.0. Download the latest episode of Cash the Ticket today. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
After a scorching hot bowl batch 1.0, Jim and Mike look to continue their incendiary start with a fresh new batch. The guys also get into the death of the USC/Notre Dame rivalry, Costa going abroad for a bowl game and so much more. Download and subscribe to Cash the Ticket today. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
The Epstein files keep coming out, and instead of clarity, they are producing something far messier: suspicion without resolution and outrage without proof. What we are seeing now is not the mythical document many people imagine, a clean list pairing powerful men with specific criminal acts. That list does not exist. What exists are FBI files and grand jury materials filled with allegations, some credible, some vague, many never fully investigated. The result is a widening cloud of suspicion over a long list of names, with no clear answers about who did what or why prosecutors failed to act when they had the chance. Politics Politics Politics is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. That ambiguity is why this release satisfies no one. New documents, like the bizarre and possibly fake letter to Larry Nassar attributed to Epstein after his death, only deepen confusion rather than resolve it. If the Trump administration delayed releasing these files out of fear of what they contained, that decision backfired badly. The slow drip has turned the Epstein case into a permanent Rorschach test, where everyone sees what they already believe. Until the Justice Department explains what it has, what it does not, and why accountability failed for so long, the Epstein story will remain unresolved and corrosive. Chapters: 00:00 - Intro; 01:39 - Epstein; 05:16 - Tevi Troy on Lame Duck Presidents; 49:41 - Wrap-up. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.politicspoliticspolitics.com/subscribe
Trump Mentioned in ANOTHER Big Batch of Epstein Files, Flew on ‘Lolita Express,' PLUS…Supreme Court Denies Emergency Request by Trump to Deploy National Guard to Chicago
Clone Force 99 is broke, desperate, and backed into a corner, which means helping someone they really don't want to. This week on Dark Side Divas, we break down Star Wars: The Bad Batch Season 1, Episode 10, "Common Ground." The squad heads to Raxus to rescue a former enemy, politics get messy, and Omega starts learning that survival sometimes means knowing how to make a deal… especially with Cid. Join Stef and Chris as we dive into uneasy alliances, shifting morals, and what this episode reveals about the Batch's future.
After months of political wrangling, parts of the long-awaited Epstein files have been released by the US Justice Department. The trove consists of thousands of documents related to the late sex offender. Pictures include former US President Bill Clinton, Andrew Mountbatten-Windsor, Britain's former prince, and musicians Mick Jagger and Michael Jackson. Being named or pictured in the files is not an indication of wrongdoing. The Justice Department did not release all existing files, and the published ones were heavily redacted, prompting frustrated reactions from survivors of Epstein's abuse. Also: the US carries out dozens of strikes against the Islamic State group in Syria. Anti-government youth protesters in South Korea are taking cues from the American right's MAGA movement. Italy announces a fee for tourists to visit the Trevi Fountain in Rome. Putin vows revenge on Ukraine after an oil tanker was blown up in the Mediterranean Sea. Palestinians tell the BBC they were sexually abused in Israeli prisons. And how a lost radio play by Tennessee Williams was found more than four decades after his death, and has now been heard for the first time. The Global News Podcast brings you the breaking news you need to hear, as it happens. Listen for the latest headlines and current affairs from around the world. Politics, economics, climate, business, technology, health – we cover it all with expert analysis and insight. Get the news that matters, delivered twice a day on weekdays and daily at weekends, plus special bonus episodes reacting to urgent breaking stories. Follow or subscribe now and never miss a moment. Get in touch: globalpodcast@bbc.co.uk
P.M. Edition for Dec. 19. The Justice Department releases the first batch of files tied to its investigation of sex offender Jeffrey Epstein. U.S. home sales rise to their highest level since February. And WSJ's Kelly Crow explains how the art market is adapting to younger buyers. Sabrina Siddiqui hosts. Sign up for the WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
How do three college friends go from a Greek food cart to nearly $50 million in revenue selling hemp-derived THC products—all while being blocked from traditional advertising channels? In this episode, I sit down with Andy Gould, co-founder of Batch, to unpack one of the most explosive growth stories in the history of this podcast. Listen in as Andy breaks down how Batch scaled in a restricted category using Meta ads, bulk content creation, and a powerful creative flywheel. We also dig into bootstrapping vs. outside funding, co-founder dynamics, and what the sudden federal hemp prohibition means for the future of Andy's $50M business. You can find show notes and more information by clicking here: https://bit.ly/3MnRQ5g Interested in our Private Community for 7-Figure Store Owners? Learn more here. Want to hear about new episodes and eCommerce news round-ups? Subscribe via email.
Call us The Malty Boyz™. Some years ago--never mind how long precisely--having little or no malty bucks in our purse, and nothing particular to interest us on the shelves at Binny's, we thought we would sail about a little and see the watery part of the world. Now we've slayed many a white whale but it's been a minute since we've searched for a modern day whale. Which is why in this episode, Alex reels in some Schramm's, Hill Farmstead, Moksa, and Perennial Artisan Ales to see what's still worth paying triple digits over in the beer and mead game. In the Beer News, the thrilling conclusion of the Fair State saga, Canada gets its own gas station whale, and beer brewed for an arctic expedition looks to be brought back to life. To get involved with the "Life" International Barleywine Collab, click the link for info about the recipe, BSG discount, and links to help raise awareness of colon cancer. If you'd like to make a direct donation to help support Alex, head over to his GoFundMe. For more info about colon cancer and to help support the fight against it check out the Colon Cancer Foundation. Head to our Patreon for weekly exclusive content. Get the Malt Couture Officially Licensed T-shirt. Follow DontDrinkBeer on Instagram and Twitter
Valenti and Costa are back with the first version of their bowl extravaganza. The guys tackle the first round of the College Football Playoff as well as other bowl games that will be front and center when the CFP isn't being played. The guys also get into Valenti's Heisman vote, Michigan's coaching search, and more. Download and subscribe to the latest episode of Cash the Ticket today. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices