Brandon Hogstad — a scientist, musician, big thinker, and co-host of a dream interpretation podcast — talks about how ADHD showed up in his adult academic life. As challenges emerged, finishing projects became a persistent struggle. A high school valedictorian, Brandon entered college with confidence and a strong academic track record. College didn't derail him. But it brought him down to earth. For the first time, he realized he'd never really learned how to study — and that raw intelligence only goes so far. The experience reshaped his ego and deepened his understanding of his ADHD brain. Brandon reflects on working with, not against, his ADHD. And the conversation turns when, right on the spot, he interprets a dream that host Laura Key shares.
For more on this topic:
Read: ADHD and the brain
Watch: ADHD and: Overachieving
Listen: Brandon's “Let's Talk About Dreams” podcast
For a transcript and more resources, visit ADHD Aha! on Understood.org. You can also email us at adhdaha@understood.org.
ADHD Unstuck is a free, self-guided activity from Understood.org and Northwestern University designed to help women with ADHD boost their mood and take small, practical steps to get unstuck. In about 10 minutes, learn why mood spirals happen and get a personalized action plan of quick wins and science-backed strategies that work with your brain. Give it a try at Understood.org/GetUnstuck.
Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Today is day 33 and we are in the section Concerning Holy Scripture on question 33. 33. How should Holy Scripture be understood? Because Holy Scripture was given by God to the Church, it should always be understood in ways that are faithful to its own plain meaning, to its entire teaching, and to the Church's historic interpretation. It should be translated, read, taught, and obeyed accordingly. (Nehemiah 8:1–8; Psalm 94:8–15; Acts 8:26–35; 18:24–28; Jerusalem Declaration, Article 2; Articles of Religion, 20) Today we will pray the Nun Stanza of Psalm 119 (verses 105–112), which is on page 435 of the Book of Common Prayer (2019). If you would like to buy or download To Be a Christian, head to anglicanchurch.net/catechism. Produced by Holy Trinity Anglican Church in Madison, MS. Original music from Matthew Clark. Daily collects and Psalms are taken from the Book of Common Prayer (2019), created by the Anglican Church in North America and published by the Anglican Liturgical Press. Used by permission. All rights reserved. Scripture quotations are from The ESV® Bible (The Holy Bible, English Standard Version®), copyright © 2001 by Crossway, a publishing ministry of Good News Publishers. Used by permission. All rights reserved. Catechism readings are taken from To Be a Christian - An Anglican Catechism Approved Edition, copyright © 2020 by the Anglican Church in North America, published by Crossway, a publishing ministry of Good News Publishers. Used by permission. All rights reserved.
Experienced specialist gay therapist Ken Howard, LCSW, CST, examines neurodivergence, autism spectrum, dating, and intimacy in gay men, offering practical guidance for neurodivergent men and their partners.
Trying to conceive can be incredibly isolating, especially when your friends and family just do not get it. That is why I loved this conversation with Sarah Banks, fertility coach, speaker, author, and creator of the Positivity Planner.
Sarah's work is all about helping you feel more emotionally supported through treatment. She also has years of experience working with clinics on patient support strategies, so she sees both sides: what patients need, and what is still missing in the system.
We talked about why fertility coaching can be such a powerful complement to medical treatment, how to navigate the emotional rollercoaster of TTC, and how to protect your mental wellbeing, even when things do not go as planned.
What we discuss in this episode:
The power of coaching and how it can support people emotionally through treatment
How Sarah's own journey inspired her work in fertility and patient experience
What fertility clinics are doing (and not doing) to better support patients
Coping with anxiety, stress, and overwhelm while TTC
Strategies for building resilience and staying hopeful
Tips for advocating for yourself in appointments and with providers
The Positivity Planner and how journaling can support your mental wellbeing
The importance of community, connection, and being heard
What Sarah wishes everyone struggling with infertility knew
If you are feeling like you have no one to talk to, or like you are supposed to just keep going while your heart is breaking, this one is for you.
This episode is sponsored by Access Fertility. Worried about the financial pressure of treatment? Access Fertility offers funding programmes and 0% interest finance to help ease the burden of self-funding IVF.
Their services include:
Loans of up to £12,000 with no interest over 12 months
Multi-cycle packages that can save you up to 30%
Refund programmes offering up to 100% back if treatment is unsuccessful
Partnerships with over 60 top clinics in the UK
Personalised advice based on your age and treatment plan
Visit accessfertility.com/thefp to learn more.
Learn more about Sarah's work:
Positivity Planners
sarahbanks.coach
Let's keep the conversation going:
Send Us a Message (include your contact info if you'd like a reply)
Fair can feel like justice. In divorce, it often becomes a trap. In this episode, we sat down with attorney and managing partner Sara Marler to explore why “I just want what's fair” derails strategy, inflates costs, and delays peace—and how a trauma-informed, whole-person approach helps clients pivot toward outcomes they can actually live with. Sara opens the curtain on what courts really weigh under standards like “just and equitable,” why judges prioritize clarity over grievance, and how stories of betrayal still matter when they are used to guide better decisions rather than to fuel a courtroom campaign.
Together, we map the gap between emotional fairness and legal reality, then show how to close it with reframing, education, and the right team. You'll hear how validating a client's experience builds trust, how divorce coaches reduce legal fees by handling the emotional heavy lifting, and why amicable and collaborative professionals consistently deliver faster, more sustainable results than “shark” tactics. We also talk practical tools—mindfulness, targeted parenting classes, curated resources—that help parents stop scorekeeping and design plans centered on children's needs, not adult ego.
If you're navigating separation or advising clients through it, this conversation offers a clear path from conflict to closure: focus on what you can control, choose resolution over vindication, and measure success by stability, not revenge. Divorce splits one household into two; it won't look the same, and that's okay. The goal is a livable outcome that protects your kids, your wallet, and your future self.
Subscribe, share this episode, and leave us a review to help others find us. To learn more about Sara, visit her practice website at: https://marlerlawpartners.com/
Learn more about DCA® or any of the classes or events mentioned in this episode at the links below:
Website: www.divorcecoachesacademy.com
Instagram: @divorcecoachesacademy
LinkedIn: divorce-coaches-academy
Email: DCA@divorcecoachesacademy.com
Dylan LeClair, Head of Bitcoin Strategy at Metaplanet, joins David Sencil at the Bitcoin MENA Conference to explain how a new generation came to Bitcoin before traditional economic theory — and how that perspective now informs one of the most aggressive corporate Bitcoin strategies in the world.
This interview covers Metaplanet's Bitcoin treasury model, using Japanese capital markets to acquire Bitcoin, volatility and options strategies, institutional adoption, why the four-year Bitcoin cycle may never have existed, and why macro liquidity now drives Bitcoin markets.
00:00 Introduction to Bitcoin and Personal Journey
03:01 Role and Responsibilities at Metaplanet
05:49 Investment Strategies and Market Dynamics
08:55 Custody and Capital Raising Strategies
11:56 Market Navigation and MNAV Insights
15:09 Institutional Interest and Regulatory Landscape
17:48 Bitcoin's Future and Privacy Considerations
Sorry, I Missed This: The Everything Guide to ADHD and Relationships with Cate Osborn
Ever catch yourself spiraling over a decision and feeling like your brain won't stop replaying every possible “what if”? Dr. J is joining us to talk about rumination, overthinking, and getting caught in a mood spiral as a woman with ADHD. We're breaking down why we get stuck and practical ways to interrupt those thought loops.
For more on this topic:
Try: ADHD Unstuck (a free self-guided activity)
Listen: How to climb out of mental rabbit holes (from Hyperfocus)
Read: ADHD and mood swings
For a transcript and more resources, visit Sorry, I Missed This on Understood.org. You can also email us at sorryimissedthis@understood.org.
Routines aren't about perfection. They're about keeping the peace and developing a sense of stability. In this episode, Dr. J explains why traditional routines can feel impossible for ADHD brains — and what actually works. Think tiny, doable habits. Attaching new routines to things you already do. And yes, leaving room for rest, fun, and even the occasional “I forgot my socks” day.
For more on this topic:
Listen: ADHD and time perception
Read: One woman's daily routine with ADHD
Read: How to build habits with ADHD
Watch: Jessica McCabe on sticking to habits and routines
For a transcript and more resources, visit MissUnderstood on Understood.org. You can also email us at podcast@understood.org.
What Fresh Hell: Laughing in the Face of Motherhood | Parenting Tips From Funny Moms
What do we do as parents when our kids aren't great at making friends, or their friends are outgrowing them, or we feel that their friends are a bad influence? Sometimes, we're not supposed to do anything at all. Sometimes our kids really need our support. How can we tell the difference?
In this episode, Amy and Margaret discuss:
what might contribute to trouble making friends
the skills kids can develop to become better friends
what to do when you don't like your kid's friends
This episode was originally released on November 6, 2024.
Here are links to some of the resources mentioned in the episode:
Michelle Icard for CNN: Parents ‘should be seen and not heard' when it comes to kids and their friendships
Parenting.org: My Child Has No Friends
Julia Morrill for Health Matters: How Parents Can Help Their Kids Make Friends
Lexi Walters Wright for Understood.org: 4 skills for making friends
Claire McCarthy for Harvard Health Publishing: Helping children make friends: What parents can do
Kelsey Borresen for HuffPost: What To Do If You Don't Like Your Kid's Friend
What Fresh Hell is co-hosted by Margaret Ables and Amy Wilson. We love the sponsors that make this show possible! You can always find all the special deals and codes for all our current sponsors on our website: https://www.whatfreshhellpodcast.com/p/promo-codes/
Research biologist Nathaniel Jeanson, author of "Traced: Human DNA's Big Surprise," talks about how the study of DNA has deeply changed our understanding of human history. Apologist Abdu Murray, author of "Fake ID," talks about how the identity ideologies of our day, combined with AI, are destroying many people's acceptance of reality. How do you keep yourself rooted in truth?
The Reconnect with Carmen and all Faith Radio podcasts are made possible by your support. Give now: Click here
U.S. kids are more depressed, stressed, and anxious than ever. ADHD and autism diagnosis rates are steadily rising. What's going on? In this episode of Hyperfocus, journalist Jia Lynn Yang joins Rae to examine how major school policy shifts in the U.S. have changed what's expected of kids, often with unintended — and serious — consequences. Drawing from her New York Times reporting and her personal experience as a parent, Jia Lynn explores whether school itself may be contributing to the crisis — and what kids actually need to thrive.
For more on this topic:
Read: Jia Lynn's piece: America's children are unwell. Are schools part of the problem?
Read: CDC youth mental health snapshot
Read: The evolution of Common Core standards
For a transcript and more resources, visit Hyperfocus on Understood.org. You can also email us at hyperfocus@understood.org.
Hello to you listening in Bethesda, Maryland!
Coming to you from Whidbey Island, Washington, this is Stories From Women Who Walk with 60 Seconds for Wednesdays on Whidbey and your host, Diane Wyzga.
When I launched my communication consulting practice, Quarter Moon Story Arts, I established a uniquely forward-looking, story-based business founded on the power of story to profoundly and positively shift our awareness, our behavior, even our culture. Like magic, the sorcery of stories is this: they help each of us to be seen and heard, to understand and be understood.
As the eldest of 7 children, an incest survivor, nurse, attorney, litigation consultant, and professional storyteller, I had to teach myself again and again how to be seen, heard, understood, and listened to. How did I do that? I learned to tell my personal and professional stories in my own words with my own values in my own way. Always it was a now-or-never chance to become a stubbornly courageous speaker willing to give life to my authentic voice.
My mission is language. Language is power. Your stories, visions, ideas, and messages are powerful, but only if they are brought to life. What if you could tell the story that advances your business, creates clarity in life choices, persuades your clients, or produces effective results from your ideas?
CTA: If you have a desire to say what you mean and mean what you say, come as you are and change inside Quarter Moon Story Arts.
Book a Discovery Call and get your story going => Email me => Info@quartemoonstoryarts.net
You're always welcome: "Come for the stories - Stay for the magic!" Speaking of magic, I hope you'll subscribe, share a 5-star rating and nice review on your social media or podcast channel of choice, bring your friends and rellies, and join us! You will have wonderful company as we continue to walk our lives together. Be sure to stop by my Quarter Moon Story Arts website, check out the Communication Services, email me to arrange a no-obligation Discovery Call, and stay current with me as "Wyzga on Words" on Substack.
Stories From Women Who Walk Production Team
Podcaster: Diane F Wyzga & Quarter Moon Story Arts
Music: Mer's Waltz from Crossing the Waters by Steve Schuch & Night Heron Music
ALL content and image © 2019 to Present Quarter Moon Story Arts. All rights reserved. If you found this podcast episode helpful, please consider sharing and attributing it to Diane Wyzga of Stories From Women Who Walk podcast with a link back to the original source. Thank you!
Psychotherapist, author, and ADHD pioneer Terry Matlen shares what led to her ADHD diagnosis. Terry's path started with years of shame and the feeling that everyday life was inexplicably harder than it should be. She describes getting overwhelmed by ordinary moments: making dinner, figuring out what to wear, and freezing at the sink with a wooden spoon in her hand.
Terry is an expert on ADHD in women. She talks about mood regulation and self-esteem with empathy. And she offers hard-won guidance for women with ADHD, especially moms. The conversation is honest — and likely to feel familiar to anyone who's ever felt like everyday life is too much to handle.
For more on this topic:
Listen: She broke the silence on ADHD shame in women (Sari Solden's story)
Listen: She wrote the book on women, shame, and ADHD
Read: ADHD and mood swings
For a transcript and more resources, visit ADHD Aha! on Understood.org. You can also email us at adhdaha@understood.org.
Some people can't, or won't, understand you. Don't waste time trying.
Visit our website to join the weekly newsletter and attend no-cost events.
→ Unmasked Conversations - 2nd + 4th Saturdays
→ Structured Social Hour - Last Sundays
→ CEOS Business Networking Hour - First Thursdays
Register here: www.patternsofpossibility.com/events
#audhd #rejection #friendships
What Fresh Hell: Laughing in the Face of Motherhood | Parenting Tips From Funny Moms
When we have a kid who just doesn't seem to fit in—or who is a loner, if a fairly content one—it can be hard for parents. But putting our own anxiety about it aside, and getting clear on the lagging skills and social cues that may not quite be in place, is the best way to help kids get on a better path. This episode is full of specific and useful advice!
Amy and Margaret discuss:
all the reasons kids can have trouble making (and keeping) friends
five "unwritten social rules" that some kids take longer to comprehend
how figuring out the specific issues at play can lead to the most useful solutions
This episode was originally released on May 29, 2024.
Here are links to some of the resources mentioned in the episode:
Jamie Howard, et al. for Child Mind Institute: Kids Who Need a Little Help to Make Friends
The Sue Larkey podcast: Promoting Social Understanding – Social Scripts
Gwen Dewar for Parenting Science: How to help kids make friends: 12 evidence-based tips
Christine Comizio for U.S. News Health: Understanding Kids' Friendship Struggles: Common Causes and Solutions
Lexi Walters Wright for Understood.org: 5 “unwritten” social rules that some kids miss
Andrew M.I. Lee for Understood.org: Why some kids have trouble making friends
ADHD Dude: "How to Help Your ADHD Child Keep Friends"
What Fresh Hell is co-hosted by Margaret Ables and Amy Wilson. We love the sponsors that make this show possible! You can always find all the special deals and codes for all our current sponsors on our website: https://www.whatfreshhellpodcast.com/p/promo-codes/
Annalisa answers followers' questions. I would love to hear your thoughts on this episode.
Support the show
Tired of ADHD strategies that don't work? Here's what actually does. FREE training here: https://programs.tracyotsuka.com/signup
Wanting to be understood is completely normal. Especially for ADHD women. But there's a moment where that need quietly shifts, and suddenly we're not trying to connect anymore. We're trying to survive.
In this episode, let's talk about why feeling misunderstood doesn't just feel uncomfortable. It can feel unsafe. When that happens, the nervous system takes over. The brain speeds up. We explain more. We repeat ourselves. Not because we're trying to win an argument, but because our body is trying to prevent rejection. We explore how rejection sensitive dysphoria, a reactive amygdala, and years of being misread wire ADHD brains to overexplain as a form of self-protection.
Let's unpack why overexplaining is not a communication problem; it's a nervous system response. We explore rejection sensitive dysphoria, the empathy gap, and why saying more often creates more distance, not more understanding. We also talk about the shift that changes everything: moving from chasing understanding to choosing safety, and how to protect your energy without shrinking, defending, or disappearing.
Resources:
Website: tracyotsuka.com
Instagram: https://instagram.com/tracyotsuka
YouTube: https://www.youtube.com/@tracyotsuka4796
FREE 3-days to Fall in Love With Your ADHD Brain training on Jan 6th: https://tracyotsuka.com/ilovemybrain
If this podcast helps you understand your ADHD brain, Shift helps you train it. Practice mindset work in just 10 minutes a day. Learn more at tracyotsuka.com/shift
Struggling to figure out what to do next? ADHD isn't a productivity problem. It's an identity problem. That's why most strategies don't stick—they weren't designed for how your brain actually works. Your ADHD Brain is A-OK Academy is different. It's a patented, science-backed coaching program that helps you stop fighting your brain and start building a life that fits.
On Being Understood is the first episode of The GABA Podcast — a quiet reflection on language, meaning, and listening.
Quietly supported by Agent Page, a project concerned with how people and organisations are interpreted by machines.
You're not here to convince the world you make sense. You're only abandoning yourself by trying.
Visit our website to join the weekly newsletter and attend no-cost events.
→ Unmasked Conversations - 2nd + 4th Saturdays
→ Structured Social Hour - Last Sundays
→ CEOS Business Networking Hour - First Thursdays
Register here: www.patternsofpossibility.com/events
#audhd #rejection #friendships
Sorry, I Missed This: The Everything Guide to ADHD and Relationships with Cate Osborn
When ADHD overwhelm hits, it's usually not because of one big event. It's the work project and your kid's school play and the relationship thing and everyone is out of clean socks... and now you're caught in a spiral.
Today, Cate and our fabulous producer, Jessamine, dig into Reddit stories about work screw-ups, panic lying, and how pattern recognition can quietly turn everyday moments into emotional flashpoints in relationships. What actually can stop that spiral?
For more on this topic:
Listen: ADHD and workplace stress
Listen: Managing expectations in relationships (feat. KC Davis)
Read: How to Keep House While Drowning, by KC Davis
Read: Fair Play, by Eve Rodsky
For a transcript and more resources, visit Sorry, I Missed This on Understood.org. You can also email us at sorryimissedthis@understood.org.
Accountability can feel loaded with guilt for women with ADHD — especially after years of masking, late diagnosis, and being told you're “making excuses.” In this episode, Dr. J breaks down why accountability hits so hard, how hormones and executive function play a role, and the difference between excuses and explanations.
For more on this topic:
Listen: Punishment for ADHD symptoms
Read: ADHD and shame
For a transcript and more resources, visit MissUnderstood on Understood.org. You can also email us at podcast@understood.org.
Musician Gavin Friday joins Brendan on the 10th anniversary of David Bowie's death to talk about what Bowie meant to him as a teenager growing up in Dublin, having dinner with his hero (and not seeing eye to eye!), and Bowie's enduring influence.
In this meeting of The Late Diagnosis Club, Dr Angela Kingdon welcomes Julie M. Green, a writer, Autistic mother, and late-identified Autistic woman whose self-recognition unfolded through parenting. Julie's story begins not with her own diagnosis, but with her son's. As she learned how to support an Autistic child, she slowly began to recognise familiar patterns in herself — sensory sensitivity, rigidity, perfectionism, chronic illness, and lifelong shyness that had always been framed as personality flaws rather than neurodivergence.Together, Angela and Julie explore maternal guilt, masking across decades, self- and formal diagnosis, and what changes — and what doesn't — when you finally have language for your nervous system.
Happy New Year! You may have noticed that in 2025 we moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They were then one of the few companies from Nat Friedman and Daniel Gross's AI Grant to raise a full seed round from the pair, and they have now become the independent gold standard for AI benchmarking, trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

We have chatted with both Clementine Fourrier of HuggingFace's OpenLLM Leaderboard and (freshly valued at $1.7B) Anastasios Angelopoulos of LMArena about their approaches to LLM evals and trendspotting, but Artificial Analysis has staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is “open” really?

We discuss:

* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google's Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints
* How they make money: an enterprise benchmarking insights subscription (standardized reports on model deployment: serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding “I don't know”; see the scoring sketch after the timestamps below), and Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDP Val AA: their version of OpenAI's GDPval (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (HumanEval-style coding is now trivial for small models)

Links to Artificial Analysis

* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith

Full Episode on YouTube

Timestamps

* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks
* 28:01 New Benchmarks: Omissions Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for Intelligence
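One bullet above is concrete enough to sketch in code: the Omissions Index scoring range of -100 to +100. Artificial Analysis has not published its exact weighting, so the Python below assumes the simplest rule consistent with the description: +1 for a correct answer, 0 for an abstention ("I don't know"), -1 for a confidently wrong answer, with the mean scaled to that range. Every name in it is illustrative, not their implementation.

```python
# Illustrative scoring for an Omissions-style hallucination metric.
# Assumed rule (not Artificial Analysis's published code): a correct
# answer earns +1, "I don't know" earns 0, and a confidently wrong
# answer earns -1, with the mean scaled into [-100, +100].

CORRECT, ABSTAIN, INCORRECT = "correct", "abstain", "incorrect"

def omissions_style_score(outcomes: list[str]) -> float:
    """+100 if every answer is right, -100 if every answer is a
    hallucination; abstaining never costs as much as being wrong,
    which is the whole point of the metric."""
    if not outcomes:
        raise ValueError("no graded outcomes")
    points = {CORRECT: 1.0, ABSTAIN: 0.0, INCORRECT: -1.0}
    return 100.0 * sum(points[o] for o in outcomes) / len(outcomes)

# Under this rule, a model that answers 6/10 correctly, abstains 3
# times, and hallucinates once beats one that guesses on everything
# and gets 7/10 right:
print(omissions_style_score([CORRECT] * 6 + [ABSTAIN] * 3 + [INCORRECT]))  # 50.0
print(omissions_style_score([CORRECT] * 7 + [INCORRECT] * 3))              # 40.0
```

The design choice this illustrates: any penalty for a wrong answer that is larger than the penalty for abstaining rewards calibrated models, which is why Claude's low hallucination rate can lead this index without leading the raw intelligence rankings.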
Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.

swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks, and how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can't pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let's get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. We run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups. We want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insight subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself, which is an example of the kind of decision big enterprises face, and it's hard to reason through; this AI stuff is really new to everybody. And so we try to help companies navigate that with our reports and insight subscription. We also do custom private benchmarking. And so that's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks, run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking.
Yeah. So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.

swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.

Micah [00:04:19]: George was in SF, but he's Australian, but he moved here already. Yeah.

swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll get to the private benchmarks. Yeah.

George [00:04:33]: Why don't we even go back a little bit to, like, why we, you know, thought that it was needed? Yeah.

Micah [00:04:40]: The story kind of begins like in 2022, 2023. Like both George and I have been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. It actually worked pretty well for its era, I would say. Yeah. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. So I had like this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like you're trying to think about accuracy, a bunch of other metrics, and performance and cost. And mostly just no one was doing anything to independently evaluate all the models. And certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things, measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.

swyx [00:05:49]: Like we didn't, like, get together and say, hey, we're going to stop working on all this stuff, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.

Micah [00:05:58]: That's actually true. I don't even think we'd paused anything. Like George had his job. I didn't quit working on my legal AI thing. Like it was genuinely a side project.

George [00:06:05]: We built it because we needed it as people building in the space, and thought, oh, other people might find it useful too. So we bought a domain, linked it to the Vercel deployment that we had, and tweeted about it. But very quickly it started getting attention. Thank you, Swyx, for, I think, doing an initial retweet and spotlighting this project that we released. And then very quickly, though, it was useful to others, but very quickly it became more useful as the number of models released accelerated. We had Mixtral 8x7B, and it was key. That's a fun one. Yeah. Like an open source model that really changed the landscape and opened up people's eyes to other serverless inference providers and thinking about speed, thinking about cost. And so that was key. And so it became more useful quite quickly. Yeah.

swyx [00:07:02]: What I love about talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more.
When you started out, I would say, I think the status quo at the time was every paper would come out and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some version of an Excel sheet or a Google sheet where you just copy and paste the numbers from every paper and post it up there. And then sometimes they don't line up because they're independently run. And so your numbers are going to look better than... your reproductions of other people's numbers are going to look worse, because you don't hold their models correctly or whatever the excuse is. I think then Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. If I were to start Artificial Analysis at the same time you guys started, I would have used EleutherAI's eval harness. Yup.

Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, it's like, if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website... Yeah. Yeah. Like one of the reasons why we realized that we had to run the evals ourselves and couldn't just take results from the labs was just that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get... You can put the answer into the model. Yeah. That in the extreme. And you get crazy cases, like back when Google launched Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and constructed, I think never published, chain-of-thought examples, 32 of them in every topic in MMLU, to run it, to get the score. Like there are so many things that you... They never shipped Ultra, right? That's the one that never made it out. Not widely. Yeah. Yeah. Yeah. I mean, I'm sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.

swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren't at the start.

Micah [00:09:36]: So like, I mean, we were paying for it personally at the start. That's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of like hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was like kind of fine. Yeah. Yeah. These days that's gone up an enormous amount for a bunch of reasons that we can talk about. But yeah, it wasn't that bad, because you've also got to remember that the number of models we were dealing with was hardly any, and the complexity of the stuff that we wanted to do to evaluate them was a lot less. Like we were just asking some Q&A type questions, and one specific thing was, for a lot of evals initially, we were just sampling an answer. You know, like, what's the answer for this? Like, we just wanted the answer directly, without letting the models think. We weren't even doing chain of thought stuff initially. And that was the most useful way to get some results initially. Yeah.

swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right? Like because sometimes the models, the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned the wrong format, and they will get a zero for that unless you work it into your parser. And that involves more work. And so, I mean, but there's an open question whether you should give it points for not following your instructions on the format.

Micah [00:11:00]: It depends what you're looking at, right? Because you can, if you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's answered. But these days, it's mostly less of a problem. Like, if you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.

swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple choice question, sometimes there's a bias towards the first answer, so you have to randomize the responses. All these nuances, like, once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.

Micah [00:11:47]: You've also got, like, the different degrees of variance in different benchmarks, right? Yeah. So, if you run a four-option multiple-choice eval on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run of it, especially if it has a small number of questions. So, like, one of the things that we do is run an enormous number of all of our evals when we're developing new ones and doing upgrades to our Intelligence Index to bring in new things. Yeah. So that we can dial in the right number of repeats, so that we can get to the 95% confidence intervals that we're comfortable with, so that when we pull that together, we can be confident in Intelligence Index to at least as tight as, like, plus or minus one at 95% confidence. Yeah.
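The mechanics Micah just described are simple enough to sketch. The Python below is a toy version of two of them, assuming nothing about Artificial Analysis's actual pipeline: pulling a multiple-choice answer out of free-form output with a regex, and repeating a full eval run until the 95% confidence interval on the score tightens to roughly plus or minus one point. The prompt convention, the regex, and the sample scores are all illustrative.

```python
import math
import re

# Toy version of the mechanics discussed above: extract a
# multiple-choice answer from free-form model output with a regex,
# score a full pass over the eval set, then repeat runs until the
# 95% confidence interval on the score is tight enough.

ANSWER_RE = re.compile(r"Answer:\s*([ABCD])\b", re.IGNORECASE)

def extract_choice(model_output: str) -> str | None:
    """Take the last 'Answer: X' in the output; reasoning models often
    restate their final pick at the end of a long chain of thought."""
    matches = ANSWER_RE.findall(model_output)
    return matches[-1].upper() if matches else None

def score_run(outputs: list[str], gold: list[str]) -> float:
    """Percent correct for one full pass over the eval set; an
    unparseable answer scores zero rather than being dropped."""
    hits = sum(extract_choice(o) == g for o, g in zip(outputs, gold))
    return 100.0 * hits / len(gold)

def ci95(run_scores: list[float]) -> tuple[float, float]:
    """Mean and 95% CI half-width across repeated runs, using a normal
    approximation (reasonable once you have a handful of repeats)."""
    n = len(run_scores)
    if n < 2:
        raise ValueError("need at least two repeats to estimate variance")
    mean = sum(run_scores) / n
    var = sum((s - mean) ** 2 for s in run_scores) / (n - 1)
    return mean, 1.96 * math.sqrt(var / n)

# Hypothetical repeat-run scores: keep adding repeats until hw <= 1.0.
scores = [71.2, 73.0, 72.1, 71.8, 72.5]
mean, hw = ci95(scores)
print(f"{mean:.1f} +/- {hw:.1f} at 95% confidence")
```

Because the half-width shrinks with the square root of the number of repeats, halving the interval costs four times the runs, which is why, as the conversation turns to next, the repeats multiply eval cost so quickly.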
swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.

George [00:12:37]: So, that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that's assuming one repeat in terms of how we report it, because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there because of the repeats.

swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking, you don't have any special deals with the labs. They don't discount it. You just pay out of pocket or out of your sort of customer funds. Oh, there is a mix.
So, the issue is that sometimes they may give you a special endpoint, which is… Ah, 100%.

Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, like, on everything we do on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true, like for the one you bring up right here: if we're working with a lab and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy, and we're totally transparent with all the labs we work with about this: we will register accounts not on our own domain and run both intelligence evals and performance benchmarks… Yeah, that's the job. …without them being able to identify it. And no one's ever had a problem with that. Because a thing that turns out to actually be quite a good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.

swyx [00:14:23]: That's true. I never thought about that. I've been in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.

Micah [00:14:36]: I mean, okay, the biggest one, like, that I'll bring up is more of a conceptual one, actually, than, like, direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. So that doesn't mean anything that we should really call shenanigans. Like, I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, if you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing that preferably are going to be helpful for a wide range of how actual users want to use the thing that you're building, but will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to, like, how we might use modern coding agents and stuff. But it's clearly not one for one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that being a reflection of the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.

swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier. You've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join and move here. What was it like? I think you were in, like, batch two? Batch four. Batch four.
Okay.

Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies and were extremely aligned with the mission of what we were trying to do. Like, we're not quite typical of, like, a lot of the other AI startups that they've invested in.

swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they say any advice that really affected you in some way, or, like, was one of the events very impactful? That's an interesting question.

Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.

swyx [00:17:09]: Which is also, like, a crazy list. Yeah.

George [00:17:11]: Oh, totally. Yeah, yeah, yeah. There was something about, you know, speaking to Nat and Daniel about the challenges of working through a startup and just working through the questions that don't have, like, clear answers, and how to work through those kind of methodically and just, like, work through the hard decisions. And they've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that other companies in the batch and other companies in AI Grant are pushing the capabilities of what AI can do at this time. And so being in contact with them, making sure that Artificial Analysis is useful to them, has been fantastic for supporting us in working out how we should build out Artificial Analysis to continue being useful to those, like, you know, building on AI.

swyx [00:17:59]: I think to some extent, I'm of mixed opinion on that one, because to some extent, your target audience is not people in AI Grant, who are obviously at the frontier. Yeah. Do you disagree?

Micah [00:18:09]: To some extent. To some extent. But then, so a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypical power users of Artificial Analysis. Some of the people with the strongest opinions about what we're doing well and what we're not doing well and what they want to see next from us. Yeah. Yeah. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently for different models and different parts of your application to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours, like we don't charge for all our data on the website. Yeah. They are absolutely some of our power users.

swyx [00:19:07]: So let's talk about just the evals as well. So you started out from the general MMLU and GPQA stuff. What's next? How do you sort of build up to the overall index? What was in V1 and how did you evolve it? Okay.

Micah [00:19:22]: So first, just as background, we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric that we pull together currently from 10 different eval datasets to give what we're pretty confident is the best single number to look at for how smart the models are. Obviously, it doesn't tell the whole story. That's why we publish the whole website of all the charts, to dive into every part of it and look at the trade-offs. But best single number. So right now, it's got a bunch of Q&A type datasets that have been very important to the industry, like a couple that you just mentioned. It's also got a couple of agentic datasets. It's got our own long context reasoning dataset and some other use case focused stuff. As time goes on, the things that we're most interested in, that are going to be important to the capabilities that are becoming more important for AI and what developers care about, are going to be first around agentic capabilities. So surprise, surprise. We're all loving our coding agents, and how the models perform there, and on similar things for different types of work, is really important to us. Linking to economically valuable use cases is extremely important to us. And then we've got the things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.
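Since the Intelligence Index anchors the rest of the conversation, here is a quick sketch of what any synthesis metric like it has to do: put each component eval on a common 0-100 scale and combine them with fixed weights, so that one number stays comparable across models. The eval names and weights below are invented for illustration; the real index combines ten datasets under Artificial Analysis's own published methodology.

```python
# Toy synthesis metric in the spirit of the Intelligence Index: a fixed
# weighted average over per-eval scores that are already on 0-100.
# These eval names and weights are invented for illustration only.

EXAMPLE_WEIGHTS = {
    "knowledge_qa": 0.20,    # MMLU-style Q&A
    "science_qa": 0.20,      # GPQA-style graduate-level questions
    "agentic_tasks": 0.30,   # agentic / coding task success rate
    "long_context": 0.30,    # long-context reasoning
}

def synthesis_index(scores: dict[str, float],
                    weights: dict[str, float] = EXAMPLE_WEIGHTS) -> float:
    """Weighted average of per-eval scores. Requiring every component
    keeps the single number comparable across models: an index is only
    meaningful if every model ran the same evals."""
    if set(scores) != set(weights):
        raise ValueError(f"expected evals {sorted(weights)}, got {sorted(scores)}")
    return sum(weights[name] * scores[name] for name in weights)

print(synthesis_index({
    "knowledge_qa": 88.0,
    "science_qa": 74.0,
    "agentic_tasks": 61.0,
    "long_context": 55.0,
}))  # 67.2
```

The fixed-weight structure is also why version bumps matter so much: changing which datasets are in the basket, or their weights, shifts every model's score at once, which is exactly the V1-to-V3 evolution discussed next.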
Obviously, it doesn't tell the whole story. That's why we publish the whole website of all the charts, to dive into every part of it and look at the trade-offs. But it's the best single number. So right now, it's got a bunch of Q&A-type datasets that have been very important to the industry, like the couple that you just mentioned. It's also got a couple of agentic datasets. It's got our own long context reasoning dataset and some other use-case-focused stuff. As time goes on, the things that we're most interested in, that are going to be important to the capabilities that are becoming more important for AI and what developers care about, are going to be first around agentic capabilities. So, surprise, surprise: we're all loving our coding agents, and how the models perform on that, and then doing similar things for different types of work, is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating. swyx [00:20:46]: But I guess one thing I was driving at was, like, the V1 versus the V2 and how it aged over time. Micah [00:20:53]: Like how we've changed the index to where we are. swyx [00:20:55]: And I think that reflects the change in the industry, right? So that's a nice way to tell that story. Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, I think, how much progress has been made in the last two years. Like, we obviously play the game constantly of today's version versus last week's version and the week before, and all of the small changes in the horse race between the current frontier, and who has the best, like, smaller-than-10B model right now this week. Right. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out a couple of years, literally most of what we were doing to evaluate the models then would all be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence, which we can talk about more in a bit. So V1, V2, V3: we made things harder, we covered a wider range of use cases, and we tried to get closer to things developers care about, as opposed to just the Q&A-type stuff that MMLU and GPQA represented. Yeah. swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark and, like, looking around and asking questions about it. Yeah. Micah [00:22:21]: Let's do it. Okay. This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah. George [00:22:26]: And I think a little bit about the direction that we want to take it, and where we want to push benchmarks. Currently, the intelligence index and evals focus a lot on kind of raw intelligence, but we want to diversify how we think about intelligence. And we can talk about it, but the new evals that we've built and partnered on focus on topics like hallucination, and there are a lot of topics that I think are not covered by the current eval set that should be.
And so we want to bring that forth. But before we get into that… swyx [00:23:01]: And so for listeners, just as a timestamp, right now, number one is Gemini 3 Pro high, then followed by Claude Opus at 70, GPT-5.1 high (you don't have 5.2 yet), and Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep. George [00:23:25]: Totally. A quick view of that is, okay, there's a lot. I love it. I love this chart. Yeah. Micah [00:23:30]: This is such a favorite, right? Yeah. In almost every talk that George or I give at conferences and stuff, we always put this one up first to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing, and the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year. And, I mean, you would remember that time period well, of there being very open questions about whether or not AI was going to be competitive, like, full stop; whether or not OpenAI would just run away with it; whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah. George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, like, how crazy it's been. swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah. George [00:25:01]: It's models that we're kind of highlighting by default in our charts, in our intelligence index. Okay. swyx [00:25:07]: You just have a manually curated list of stuff. George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose what models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read. swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah. George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually. Micah [00:25:44]: Yeah, yeah, yeah. I agree. Yeah, well, in a couple of weeks. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones and stuff. I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek V3. So this was the first of their V3 architecture, the 671B MoE. Micah [00:26:19]: And we were very, very impressed.
That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of V3, and R1 succeeding a few weeks later. But the groundwork for that was absolutely laid with just an extremely strong base model, completely open weights, which we had as the best open weights model. So, yeah, that's the thing that you really see in the chart; DeepSeek got a lot of us on Boxing Day last year. George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar. swyx [00:26:54]: I'm from Singapore. A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though. Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it the Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence. George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric to what we now call output speed, because throughput makes sense at a system level, so we took that name. swyx [00:27:32]: Take me through more charts. What should people know? Obviously, the way you look at the site is probably different from how a beginner might look at it. Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into. Maybe we can skip past all the... we have lots and lots of evals and stuff. The interesting ones to talk about today, that would be great to bring up, are a few of our recent things that probably not many people will be familiar with yet. So the first one of those is our omniscience index. This one is a little bit different from most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models, and to test hallucination by looking at, when the model doesn't know the answer, so it's not able to get it correct, what's its probability of saying "I don't know" versus giving an incorrect answer. So the metric that we use for omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to a question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say "I don't know" instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models, and the labs creating them, to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped, and so you should take a shot at everything; there's no incentive to say "I don't know". So we did that for this one here. swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah. George [00:29:31]: On that, one reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are. swyx [00:29:43]: I don't know. Maybe it might be, though.
You put in, like, a JSON field, say "confidence", and maybe it spits out something. Yeah. You know, we have done a few evals podcasts over the years, and when we did one with Clémentine of Hugging Face, who maintained the open LLM leaderboard, this was one of her top requests: some kind of hallucination slash lack-of-confidence calibration thing. And so, hey, this is one of them. Micah [00:30:05]: And, I mean, like anything that we do, it's not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. Like, one of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. We have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but, yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking down quite granularly by topic. And so we've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah. swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would that be the other way around in a normal capability environment? I don't know. What do you make of that? George [00:31:37]: One interesting aspect is that we've found that there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability to say that they don't know when they don't know something. It's interesting that Gemini 3 Pro Preview was a big leap over here from Gemini 2.5 Flash and 2.5 Pro. And if I add Pro quickly here… swyx [00:32:07]: I bet Pro's really good. Uh, actually, no, I meant the GPT Pros. George [00:32:12]: Oh yeah. swyx [00:32:13]: Because the GPT Pros are rumored, we don't know this for a fact, to be, like, eight runs and then an LLM judge on top. Yeah. George [00:32:20]: So we saw a big jump in... this is accuracy, so this is just the percent that they get correct, and Gemini 3 Pro knew a lot more than the other models. So, a big jump in accuracy, but relatively no change in the hallucination rate between the Google Gemini models between releases. Exactly. And so the Claude models' low hallucination rate is likely due to just a kind of different post-training recipe that has driven this. Yeah. Micah [00:32:45]: You can partially blame us, and how we define intelligence, for having until now not defined hallucination as a negative in the way that we think about intelligence. swyx [00:32:56]: And so that's what we're changing.
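The scoring scheme Micah described above is easy to state precisely: a correct answer earns a point, an "I don't know" earns zero, and a wrong answer loses a point, averaged and scaled to the -100 to +100 range. A sketch follows, assuming the per-question labels come from an upstream grader, with the hallucination rate computed over the questions the model failed to answer correctly:

```python
# Sketch of the omniscience-style metric described above: correct answers earn
# a point, abstentions ("I don't know") earn zero, wrong answers lose a point.
# Per-question labels are assumed to come from an upstream grader.

def omniscience_index(labels: list[str]) -> float:
    """labels: 'correct', 'abstain', or 'wrong' per question. Returns -100..+100."""
    points = {"correct": 1, "abstain": 0, "wrong": -1}
    return 100 * sum(points[l] for l in labels) / len(labels)

def hallucination_rate(labels: list[str]) -> float:
    """Of the questions the model didn't get right, how often it guessed wrong."""
    missed = [l for l in labels if l != "correct"]
    return 100 * missed.count("wrong") / len(missed) if missed else 0.0

labels = ["correct"] * 55 + ["abstain"] * 25 + ["wrong"] * 20  # hypothetical run
print(f"index = {omniscience_index(labels):+.1f}")                # (55 - 20) -> +35.0
print(f"hallucination rate = {hallucination_rate(labels):.1f}%")  # 20 / 45 -> 44.4%
```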
Uh, I know many smart people who are confidently incorrect. George [00:33:02]: Look at that. That is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas. One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay. swyx [00:33:32]: And is it sort of like a HumanEval type or something different, or like a FrontierMath type? George [00:33:37]: It's not dissimilar to FrontierMath. So these are kind of research questions that academics in the physics world would be able to answer, but models really struggle to answer. So the top score here is only 9%. swyx [00:33:51]: And the people that created this, like Minyang, and actually Ofir, who was kind of behind SWE-bench... what organization is this? Oh, is this... it's Princeton. George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore kind of new ideas in physics with a thought partner, just because they want the models to hallucinate. Um, yeah, sometimes it's something new. Yeah, exactly. swyx [00:34:21]: So not right in every situation, but I think it makes sense, you know, to test hallucination in scenarios where it makes sense. Also, the obvious question is, this is one of many out there; every lab has a system card that shows some kind of hallucination number, and you've chosen to not endorse those and to make your own. And I think that's a choice. Totally. In some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun, and you provide it as a service here. Here you have to fight the "well, who are we to do this?" And your answer is that you have a lot of customers, you know, but, like, I guess, how do you convince the individual? Micah [00:35:08]: I mean, I think for hallucinations specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. Like, we've called this the omniscience hallucination rate, not trying to declare it, like, humanity's last hallucination. You could have some interesting naming conventions and all this stuff. The bigger-picture answer to that, and it's something that I actually wanted to mention just as George was explaining Critical Point as well, is that as we go forward, we are building evals internally, we're partnering with academia, and we're partnering with AI companies to build great evals. We have pretty strong views, in various ways for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not obsessed with the idea that everything we do, we have to do entirely within our own team. Critical Point is a cool example of where we were a launch partner for it, working with academia, and we've got some partnerships coming up with a couple of leading companies.
Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great datasets in the past that we've used to great success independently. And so between all of those techniques, we're going to be releasing more stuff in the future. Cool. swyx [00:36:26]: Let's cover the last couple, and then I want to talk about your trends analysis stuff, you know? Totally. Micah [00:36:31]: So on that, actually, I have one little factoid on omniscience. If you go back up to accuracy on omniscience: an interesting thing about this accuracy metric is that it tracks, more closely than anything else that we measure, the total parameter count of models. Makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric; we're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay. swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. Rumors. I hear all sorts of numbers. I don't know what to trust. Micah [00:37:17]: So if you draw the line on omniscience accuracy versus total parameters, and we've got all the open weights models, you can squint and see that the leading frontier models right now are likely quite a lot bigger than the one trillion parameters that the open weights models we're looking at here cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, but that's not out yet. Take those together, have a look, and you might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah. swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it. Yeah, totally. George [00:38:17]: They've also got different incentives in play compared to, like, open weights models, who are thinking about supporting others in self-deployment. For the labs who are doing inference at scale, I think it's less about total parameters in many cases when thinking about inference costs, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed. Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously, if you're a developer or a company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence, at the cost to run the index, and at the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's all that matters. swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while.
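To ground the extrapolation Micah just sketched: it amounts to regressing knowledge accuracy on the log of parameter count for the open weights models, then inverting the fit at a frontier model's score. A sketch of that arithmetic with entirely made-up numbers:

```python
# Sketch of the extrapolation described above: fit accuracy vs. log10(params)
# on open weights models, then invert the fit for a frontier model's score.
# All numbers here are made up purely for illustration.
import numpy as np

params_b = np.array([27, 120, 405, 671, 1000])   # hypothetical sizes, billions
accuracy = np.array([18, 27, 34, 38, 41])        # hypothetical knowledge accuracy

# Linear fit in log-parameter space.
slope, intercept = np.polyfit(np.log10(params_b), accuracy, 1)

frontier_accuracy = 53.0  # hypothetical frontier model score
implied_params_b = 10 ** ((frontier_accuracy - intercept) / slope)
print(f"implied size ~ {implied_params_b / 1000:.1f}T parameters")
```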
Yeah. Micah [00:39:07]: But that is, like, on its own, actually a very interesting one, right? Chances are, the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in the total size of the models, especially with the upcoming hardware generations. Yes. swyx [00:39:29]: So, you know, taking off my shitposting hat for a minute. Yes. Yes. At the same time, I do feel like, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out, and therefore we need to start exploring at least a different path. GDPval, I think, is only, like, a month or so old. I was also very positive on it when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool. And you have your own version. George [00:39:59]: It's a fantastic dataset. Yeah. swyx [00:40:01]: And maybe we will recap for people who are still out of it. It's, like, 44 tasks based on some kind of GDP cutoff, meant to represent broad white-collar work that is not just coding. Yeah. Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and some input files for a lot of them. Within the 44, it's divided into, like, 220 to 225, maybe, subtasks, which are the level that we run through the agentic harness. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work. No eval is perfect; there are always going to be more things to look at. Largely, that's because in order to define the tasks well enough that you can run them, they need to have only a handful of input files and very specific instructions for that task. And so I think the easiest way to think about them is that they're, like, quite hard take-home exam tasks that you might do in an interview process. swyx [00:40:56]: Yeah, for listeners, it is no longer, like, a long prompt. It is, like: well, here's a zip file with a spreadsheet or a PowerPoint deck or a PDF; go nuts and answer this question. George [00:41:06]: OpenAI released a great dataset, and they released a good paper which looks at performance across the different web chatbots on the dataset. It's a great paper; I encourage people to read it. What we've done is taken that dataset and turned it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the dataset, and then we developed an evaluator approach to compare outputs. It's kind of AI-enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned to human preferences. One data point there is that even with it as the evaluator, Gemini 3 Pro, interestingly, doesn't actually do that well. So that's kind of a good example of what we've done in GDPval AA. swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM judges is self-preference, that models usually prefer their own output, and in this case, it did not.
Totally. Micah [00:42:08]: I think the way that we're thinking about the places where it makes sense to use an LLM-as-judge approach now is quite different from some of the early LLM-as-judge stuff a couple of years ago, because some of that (MT-Bench was a great project that was a good example of this a while ago) was about judging conversations and, like, a lot of style-type stuff. Here, the task that the grading model is doing is quite different from the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter and web search, the file system, to go through many, many turns to try to create the documents. Then on the other side, when we're grading it, we're running the outputs through a pipeline to extract visual and text versions of the files so we can provide that to Gemini, and we're providing the criteria for the task and getting it to pick which of two potential outputs more effectively meets the criteria of the task. It turns out that it's just very, very good at getting that right; it matched human preference a lot of the time. I think that's because it's got the raw intelligence, combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different from the way the grading model works, and that we're comparing against criteria, not just kind of zero-shot asking the model to pick which one is better. swyx [00:43:26]: Got it. Why is this an Elo and not a percentage, like GDPval? George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks. Some of the tasks. swyx [00:43:43]: What task is that? George [00:43:45]: I mean, it's in the dataset. Like, be a YouTuber? It's a marketing video. Micah [00:43:49]: Oh, wow. What? Like, the model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do that with a code interpreter. I mean, the computer use stuff doesn't work quite well enough, and so on and so on, but yeah. George [00:44:02]: And so there's no kind of ground truth, necessarily, to compare against to work out percentage correct; it's hard to come up with correct or incorrect there. So it's on a relative basis, and we use an Elo approach to compare outputs from each of the models across the tasks. swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give them an Elo, so you have a human in there. I think what's helpful about GDPval, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar: well, if you've crossed 50, you are superhuman. Yeah. Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. That's one of the reasons that presenting it as an Elo is quite helpful: it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks to human performance, because the way that you would go about it as a human is quite different from how the models would go about it.
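Mechanically, a pipeline like the one described here pairs two models' outputs for the same task, asks a grading model which better meets the task criteria, and folds each verdict into an Elo update. A compressed sketch with the judge call stubbed out; the prompt shape, K-factor, and starting ratings are assumptions:

```python
# Sketch of pairwise LLM-as-judge grading feeding Elo ratings, as described
# above. The judge is stubbed; K-factor and starting ratings are assumptions.
import random

K = 32  # standard Elo K-factor, an assumption

def judge(criteria: str, output_a: str, output_b: str) -> str:
    # In a real pipeline this would send both outputs (text plus rendered
    # visuals) and the task criteria to a grading model and parse its verdict.
    return random.choice(["A", "B"])

def expected(r_a: float, r_b: float) -> float:
    # Probability that A beats B under the Elo model.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict[str, float], a: str, b: str, winner: str) -> None:
    e_a = expected(ratings[a], ratings[b])
    s_a = 1.0 if winner == "A" else 0.0
    ratings[a] += K * (s_a - e_a)
    ratings[b] += K * ((1 - s_a) - (1 - e_a))

ratings = {"model_x": 1000.0, "model_y": 1000.0, "model_z": 1000.0}
outputs = {m: f"{m}'s deliverable" for m in ratings}
criteria = "Meets the task brief and formatting requirements."

for _ in range(200):  # many pairwise comparisons across tasks
    a, b = random.sample(list(ratings), 2)
    update(ratings, a, b, judge(criteria, outputs[a], outputs[b]))

for model, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {r:.0f}")
```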
Yeah. swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there. Is that, like, just one last...? Micah [00:45:20]: Well, no, no, no, it is the best model released by Meta. And... so it makes it into the homepage default set, still, for now. George [00:45:31]: Another inclusion that's quite interesting is we also ran it across the latest versions of the web chatbots. And so we have... swyx [00:45:39]: Oh, that's right. George [00:45:40]: Oh, sorry. swyx [00:45:41]: I, yeah, I completely missed that. Okay. George [00:45:43]: No, not at all. So that's the one which has a checkered pattern. So that is their harness, not yours, is what you're saying. Exactly. And what's really interesting is that if you compare, for instance, Claude Opus 4.5 using the Claude web chatbot, it performs worse than the model in our agentic harness. And in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created. swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else. Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, they, like, have a cost goal; we let the models work as long as they want, basically. Yeah. Do you copy-paste manually into the chatbot? Yeah. Yeah. That was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as the hundreds of models. swyx [00:46:38]: Well, I don't know, talk to Browserbase. They'll automate it for you. You know, I have thought about, like, well, we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah. Micah [00:46:53]: And that's grown a huge amount over the last year, right? The tools that are available have actually diverged a fair bit, in my opinion, across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever. swyx [00:47:10]: What tools and what data connections come to mind? What's interesting, what's notable work that people have done? Micah [00:47:15]: Oh, okay. So my favorite example of this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. So for me, that's Google Drive, OneDrive, and our Supabase databases if we need to do some analysis on some data or something. Preferably, the model can be plugged into all of those things and can go do some useful work based on them. The thing that I find most impressive currently, that I am somewhat surprised works really well in late 2025, is that I can have models use the Supabase MCP to query (read-only, of course) and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and it can read my Gmail and my Notion. And okay. You actually use that. That's good. That's good. Is that a Claude thing?
To various degrees, in both ChatGPT and Claude right now. I would say that this stuff, like, barely works, in fairness, right now. George [00:48:33]: Because people are actually going to try this after they hear it. If you get an email from Micah, odds are it wasn't written by a chatbot. Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet. swyx [00:48:46]: And so you can feel it, right? And yeah, this time next year, we'll come back and see where it's going. Totally. Supabase shout-out, another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra. George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users, and we probably ask some things more manually than we should in the Supabase support line, because they're being super friendly. One extra point regarding GDPval AA is that, on the basis of the overperformance of the models compared to the chatbots, we realized that, oh, our reference harness that we built actually works quite well on, like, generalist agentic tasks. This proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else? Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPval we give it a tool specifically to view an image, because the models, you know, can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context, we had to give them a custom tool. But yeah, exactly. George [00:50:21]: So it turned out that we created a good generalist agentic harness, and so we released it on GitHub yesterday. It's called Stirrup. So if people want to check it out, it's a great base for building a generalist agent for more specific tasks. Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code, and the coding agents can work with it super well. swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think in other similar environments, the Terminal-Bench guys have done sort of the same with Harbor. And so it's a bundle of: well, we need our minimal harness, which for them is Terminus, and we also need the RL environments, or Docker deployment thing, to run independently. So I don't know if you've looked at Harbor at all. Is that, like, a standard that people want to adopt? George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host benchmarks of Terminal-Bench on Artificial Analysis. We've looked at it from a coding agent perspective, but could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough, and their tools have gotten good enough, that they perform better when just given a minimalist set of tools and let run: let the model control the agentic workflow, rather than using another framework that's a bit more built out and tries to dictate the flow. Awesome.
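For flavor, here is a toy version of the kind of minimalist agent loop described above: the model gets a tiny fixed toolset (a shell command stands in for code execution and file access) and keeps choosing actions until it decides it is done. This is a generic sketch, not Stirrup's actual code; the model call is stubbed so the loop runs as-is.

```python
# Toy minimalist agent loop in the spirit described above: give the model a
# tiny toolset and let it drive. Generic sketch, not Stirrup's actual code.
import subprocess

def run_shell(command: str) -> str:
    """Code execution / file access tool: run a shell command, return output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=30)
    return (result.stdout + result.stderr)[-2000:]  # crude context management

def stub_model(history: list[dict]) -> dict:
    # A real harness would call an LLM with the history and tool schemas and
    # parse a tool call from its reply. This stub issues one command and stops.
    if len(history) == 1:
        return {"tool": "shell", "arg": "echo hello from the agent"}
    return {"tool": "done", "arg": "Task complete."}

def agent_loop(task: str, max_turns: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        action = stub_model(history)
        if action["tool"] == "done":
            return action["arg"]
        observation = run_shell(action["arg"])
        history.append({"role": "tool", "content": observation})
    return "Ran out of turns."

print(agent_loop("Say hello via the shell."))
```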
swyx [00:51:56]: Let's cover the openness index, and then let's go into the report stuff. So that's the last of the proprietary AA numbers, I guess. I don't know how you classify all these. Yeah. Micah [00:52:07]: Call it the last of, like, the three new things that we're talking about from the last few weeks. Because, I mean, we do a mix of stuff: work where we're using open source, work where we open source what we do, and proprietary stuff that we don't always open source. The long context reasoning dataset last year we did open source. And then of all the work on performance benchmarks across the site, some of it we're looking to open source, but some of it we're constantly iterating on, and so on. So there's a huge mix, I would say, of stuff that is open source and not, across the site. swyx [00:52:41]: So that's LCR, for people. But let's talk about openness. Micah [00:52:42]: Let's talk about the openness index. This here is, call it, a new way to think about how open models are. We have, for a long time, tracked whether the models are open weights and what the licenses on them are. And that's pretty useful; that tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important and that we haven't tracked until now, and that's how much is disclosed about how it was made. So: transparency about data, pre-training data and post-training data, and whether you're allowed to use that data, and transparency about methodology and training code. Basically, those are the components. We bring them together to score an openness index for models, so that you can, in one place, get this full picture of how open models are. swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this does matter. I don't know what the numbers mean, apart from... is there a max number? Is this out of 20? George [00:53:44]: It's out of 18 currently. We've got an openness index page, but essentially these are points: you get points for being more open across these different categories, and the maximum you can achieve is 18. So AI2, with their extremely open Olmo 3 32B Think model, is the leader, in a sense. swyx [00:54:04]: Where's Hugging Face? George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to run the intelligence benchmarks first to get it on the site. swyx [00:54:12]: You can't have an openness index and not include Hugging Face. We love Hugging Face. We'll have that up very soon. I mean, you know, the RefinedWeb and all that stuff. It's amazing. Or is it called FineWeb? FineWeb. FineWeb. Micah [00:54:23]: Yeah, yeah, no, totally. Yep. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the company's contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do the intelligence index on, on the site. It's just an extra view to understand them.
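The openness scoring George outlines is a straightforward point tally across disclosure categories. A sketch follows; the category names and the per-category point split are hypothetical stand-ins, since only the 18-point maximum is stated in the conversation.

```python
# Sketch of an openness tally like the one described above. Category names and
# per-category point values are hypothetical; only the 18-point maximum is
# taken from the conversation.
CATEGORIES = {  # category: max points (assumed split summing to 18)
    "weights_license": 4,
    "pretraining_data_transparency": 4,
    "posttraining_data_transparency": 4,
    "methodology_disclosure": 3,
    "training_code": 3,
}

def openness_index(scores: dict[str, int]) -> int:
    """Sum awarded points, clamping each category to its maximum."""
    return sum(min(scores.get(cat, 0), cap) for cat, cap in CATEGORIES.items())

fully_open_model = {cat: cap for cat, cap in CATEGORIES.items()}
weights_only_model = {"weights_license": 4}

print(openness_index(fully_open_model))    # 18
print(openness_index(weights_only_model))  # 4
```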
swyx [00:54:43]: Can you scroll down to the trade-offs chart? Yeah, yeah. That one. Yeah. This really matters, right? Obviously, because you can…
How did the seizure of a Russian oil tanker unfold? Is President Trump's crypto-profiteering a bigger scandal than Watergate? And with the United States turning 250, is this a time to reflect on the UK-US special relationship? Giles Whittell is joined by The Observer's Vanessa Thorpe and James Tapper, plus special guest Jacob Silverman, host of The Making of Musk: Understood. Today they battle it out to see who can pitch the story that should lead the news. You can find Understood wherever you get your podcasts, and here: https://link.mgln.ai/NewsMeetingxMoM We want to hear what you think! Email us at: newsmeeting@observer.co.uk Follow us on Social Media: @ObserverUK on X @theobserveruk on Instagram and TikTok @theobserveruk.bsky.social on bluesky Host: Giles Whittell Producer: Casey Magloire Executive Producers: Jasper Corbett and Gary Marshall To find out more about The Observer: Subscribe to The Observer today and get access to: Our podcasts before anyone else A daily edition, curated by our editors 7 days a week Puzzles from the inventors of the cryptic crossword Recipes for every occasion Free tickets to join Observer events in our newsroom or online Hosted on Acast. See acast.com/privacy for more information.
Debbie Reber — author, podcast host, and founder of Tilt Parenting — shares her unexpected journey of discovering her ADHD as an adult. She talks about the imposter syndrome that came with it, especially after years of writing about executive function and advocating for neurodivergent kids. Debbie explains how being extremely organized her whole life — hacking her ADHD without realizing it — kept her from seeing the signs sooner. She reflects on believing she "should" be someone who has natural balance, feels accomplished every day, and can simply unwind at night. She also opens up about growing up as the class clown, being told she was too loud, and how therapy is helping her untangle those early messages and better understand herself. For more on this topic: ADHD and imposter syndrome in women Personal story: What I do when imposter syndrome creeps in Check out Debbie's books, including Differently Wired: The Parent's Guide to Raising an Atypical Child For a transcript and more resources, visit ADHD Aha! on Understood.org. You can also email us at adhdaha@understood.org. Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In the fourth and final episode of Understood: The Making of Musk, host Jacob Silverman launches into Musk's ultimate quest, his desire to colonize Mars, and how he went from wanting to save Earth to wanting to escape it. You'll hear the origin story of SpaceX. And hear from an astrophysicist who says Musk's plan is completely delusional. You can find Understood wherever you get your podcasts, and here: https://link.mgln.ai/FBxMoM4 And be sure to follow the feed for even more stories that define our digital age.
What does Musk, father of 14, expect from his, quote, "legion" of children? In episode 3 of Understood: The Making of Musk, host Jacob Silverman unravels Musk's quest for genetic optimization, including alleged embryo screening, and his pronatalist views. And we hear from his estranged daughter, Vivian. You can find Understood wherever you get your podcasts, and here: https://link.mgln.ai/FBxMoM3
Where did Elon Musk's epic ambitions begin? In search of clues, the latest season of Understood: The Making of Musk returns to his sheltered youth in apartheid South Africa, a world engineered for white supremacy. In this second episode, host Jacob Silverman explores whether Musk's authoritarian streak traces back to his Canadian grandfather. Before Joshua Haldeman brought his family to South Africa, he made waves as part of the radical 1930s Technocracy movement. And while the two men's lives only overlapped for three years, we find echoes of Elon's worldview in Haldeman's pro-tech, anti-democratic ideology. You can find Understood wherever you get your podcasts, and here: https://link.mgln.ai/FBxMoM2
Original Release Date: November 13, 2025. Live from Morgan Stanley's European Tech, Media and Telecom Conference in Barcelona, our roundtable of analysts discusses tech disruptions and datacenter growth, and how Europe factors in. Read more insights from Morgan Stanley. ----- Transcript ----- Paul Walsh: Welcome to Thoughts on the Market. I'm Paul Walsh, Morgan Stanley's European Head of Research Product. Today we return to my conversation with Adam Wood, Head of European Technology and Payments; Emmet Kelly, Head of European Telco and Data Centers; and Lee Simpson, Head of European Technology. We were live on stage at Morgan Stanley's 25th TMT Europe conference. We had so much to discuss around the themes of AI enablers, semiconductors, and telcos. So, we are back with a concluding episode on tech disruption and data center investments. It's Thursday the 13th of November at 8am in Barcelona. After speaking with the panel about the U.S. being overweight AI enablers, and the pockets of opportunity in Europe, I wanted to ask them about AI disruption, which has been a key theme here in Europe. I started by asking Adam how he was thinking about this theme. Adam Wood: It's fascinating to see this year how we've gone in most of those sectors from: how positive can GenAI be for these companies? How well are they going to monetize the opportunities? How much are they going to take advantage internally to take their own margins up? To flipping in the second half of the year, mainly to: how disruptive is it going to be, and how on earth are they going to fend off these challenges? Paul Walsh: And I think that speaks to the extent to which, as a theme, this has really, you know, built momentum. Adam Wood: Absolutely. And I mean, look, I think the first point, you know, that you made is absolutely correct – that it's very difficult to disprove this. It's going to take time for that to happen. It's impossible to do in the short term. I think the other issue is that what we've seen is – if we look at the revenues of some of the companies, you know, there are huge investments going in there. And investors can clearly see the benefit of GenAI. And so investors are right to ask the question, well, where's the revenue for these businesses? You know, where are we seeing it in info services or in IT services, or in enterprise software? And the reality is today, you know, we're not seeing it. And it's hard for analysts to point to evidence that – well, no, here's the revenue base, here's the benefit that's coming through. And so, investors naturally flip to, well, if there's no benefit, then surely we should focus on the risk. So, I think we totally understand, you know, why people are focused on the negative side of things today. I think there are differences between the sub-sectors. I mean, I think if we look, you know, at IT services, first of all, from an investor point of view, I think that's been pretty well placed in the losers' bucket and people are most concerned about that sub-sector… Paul Walsh: Something you and the global team have written a lot about. Adam Wood: Yeah, we've written about, you know, the risk of disruption in that space, the need for those companies to invest, and then the challenges they face. But I mean, if we just keep it very, very simplistic: if GenAI is a technology that, you know, displaces labor to any extent – companies that have played labor arbitrage and provided labor for the last 20-25 years, you know, they're going to have to make changes to their business model.
So, I think that's understandable. And they're going to have to demonstrate how they can change and invest and produce a business model that addresses those concerns. I'd probably put info services in the middle. But the challenge in that space is you have real identifiable companies that have emerged, that have a revenue base and that are challenging a subset of the products of those businesses. So again, it's perfectly understandable that investors would worry. In that context, it's not a potential threat on the horizon. It's a real threat that exists today against certainly their businesses. I think software is probably the most interesting. I'd put it in the kind of final bucket where I actually believe… Well, I think first of all, we certainly wouldn't take the view that there's no risk of disruption and things aren't going to change. Clearly that is going to be the case. I think what we'd want to do though is we'd want to continue to use frameworks that we've used historically to think about how software companies differentiate themselves, what the barriers to entry are. We don't think we need to throw all of those things away just because we have GenAI, this new set of capabilities. And I think investors will come back most easily to that space. Paul Walsh: Emmet, you talked a little bit there before about the fact that you haven't seen a huge amount of progress or additional insight from the telco space around AI; how AI is diffusing across the space. Do you get any discussions around disruption as it relates to the telco space? Emmet Kelly: Very, very little. I think the biggest threat that telcos do see is from the hyperscalers. So, if I look at and separate the B2C market out from the B2B, the telcos are still extremely dominant in the B2C space, clearly. But in the B2B space, the hyperscalers have come in on the cloud side, and if you look at their market share, they're very, very dominant in cloud – certainly from a wholesale perspective. So, if you look at the cloud market shares of the big three hyperscalers in Europe, this number is courtesy of my colleague George Webb. He said it's roughly 85 percent; that's how much they have of the cloud space today. The telcos, what they're doing is they're actually reselling the hyperscale service under the telco brand name. But we don't see much really in terms of the pure kind of AI disruption, but there are concerns definitely within the telco space that the hyperscalers might try and move from the B2B space into the B2C space at some stage. And whether it's through virtual networks, cloudified networks, to try and get into the B2C space that way. Paul Walsh: Understood. And Lee, maybe less about disruption, but certainly adoption, some insights from your side around adoption across the tech hardware space? Lee Simpson: Sure. I think, you know, semis are always seen as enabling the AI move, but there is adoption inside semis companies as well, and I think I'd point to design flow. So, if you look at the design guys, they're embracing the agentic system thing really quickly and they're putting forward this capability of an agent engineer, so like a digital engineer. And it – I guess we've got to get this right – is going to enable a faster time to market for the design flow on a chip. So, if you halve that design flow time, you halve that time to market, and you're creating double the value there for the client. Do you share that 50-50 with them?
So, the challenge is going to be exactly as Adam was saying: how do you monetize this stuff? So, this is kind of the struggle that we're seeing in adoption. Paul Walsh: And Emmet, let's move to you on data centers. I mean, there are just some incredible numbers that we've seen emerging, as it relates to the hyperscaler investment that we're seeing in building out the infrastructure. I know data centers is something that you have focused tremendously on in your research, bringing our global perspectives together. Obviously, Europe sits within that. And there is a market here in Europe that might be more challenged. But I'm interested to understand how you're thinking about framing the whole data center story? Implications for Europe. Do European companies feed off some of that U.S. hyperscaler CapEx? How should we be thinking about that through the European lens? Emmet Kelly: Yeah, absolutely. So, big question, Paul. What… Paul Walsh: We've got a few minutes! Emmet Kelly: We've got a few minutes. What I would say is there was a great paper that came out from Harvard just two weeks ago, and they were looking at the scale of data center investments in the United States. And clearly the U.S. economy is ticking along very, very nicely at the moment. But this Harvard paper concluded that if you take out data center investments, U.S. economic growth today is actually zero. Paul Walsh: Wow. Emmet Kelly: That is how big the data center investments are. And what we've said in our research very clearly is if you want to build a megawatt of data center capacity, that's going to cost you roughly $35 million today. Let's put that number out there. 35 million. Roughly, I'd say 25… Well, 20 to 25 million of that goes into the chips. But what's really interesting is the other remaining $10 million per megawatt, and I like to call that the picks and shovels of data centers; and I'm very convinced there is no bubble in that area whatsoever. So, what's in that area? Firstly, the first building block of a data center is finding a powered land bank. And this is a big thing that private equity is doing at the moment. So, find some real estate that's close to a mass population, that's got a good fiber connection, probably needs a little bit of water, but most importantly has some power. And the demand for that is still infinite at the moment. Then beyond that, you've got the construction angle, and there's a very big shortage of labor today to build the shells of these data centers. Then the third layer is the likes of capital goods, and there are serious supply bottlenecks there as well. And I could go on and on, but roughly, in that first $10 million, there's no bubble there. I'm very, very sure of that. Paul Walsh: And we conducted some extensive survey work recently as part of your analysis into the global data center market. You've sort of touched on a few of the gating factors that the industry has to contend with. That survey work was done on the operators and the supply chain, as it relates to data center build-out. What were the key conclusions from that? Emmet Kelly: Well, the key conclusion was there is a shortage of power for these data centers, and… Paul Walsh: Which I think… which is a sort of known-known, to some extent. Emmet Kelly: It is a known-known, but it's not just about the availability of power, it's the availability of green power. And the price of power is a very big factor as well, because energy is roughly 40 to 45 percent of the operating cost of running a data center.
So, it's very, very important. And of course, that's another area where Europe doesn't screen very well. I was looking at statistics just last week on the countries that have got the highest power prices in the world. And unsurprisingly, it came out as UK, Ireland, Germany, and that's three of our big five data center markets. But when I looked at our data center stats at the beginning of the year, to put a bit of context into where we are… Paul Walsh: In Europe… Emmet Kelly: In Europe versus the rest. So, at the end of [20]24, the U.S. data center market had 35 gigawatts of data center capacity. But that grew last year at a clip of 30 percent. China had a data center bank of roughly 22 gigawatts, but that had grown at a rate of just 10 percent. And that was because of the chip issue. And then Europe has capacity, or had capacity at the end of last year, of roughly 7 to 8 gigawatts, and that had grown at a rate of 10 percent. Now, the reason for that is because the three big data center markets in Europe are called FLAP-D. So, it's Frankfurt, London, Amsterdam, Paris, and Dublin. We had to put an acronym on it. So, FLAP-D. Good news. I'm sitting with the tech guys. They've got even more acronyms than I do, in their sector, so well done them. Lee Simpson: Nothing beats FLAP-D. Paul Walsh: Yes. Emmet Kelly: It's quite an achievement. But what is interesting is three of the big five markets in Europe are constrained. So, Frankfurt, post the Ukraine conflict. Ireland, because in Ireland, an incredible statistic is data centers are using 25 percent of the Irish power grid, compared to a global average of 3 percent. Now I'm from Dublin, and data centers are running into conflict with industry, with housing estates. Data centers are using 45 percent of the Dublin grid, 45. So, there's a moratorium on building data centers there. And then Amsterdam has a classic semi-moratorium in place because it's a small country with a very high population. So, three of our five markets are constrained in Europe. What is interesting is, it started with the former Prime Minister Rishi Sunak: the UK has made great strides at attracting data center money and AI capital into the UK, and the current Prime Minister continues to do that. So, the UK has definitely moved from the middle lane into the fast lane. And then Macron in France: he hosted an AI summit back in February and he attracted over 100 billion euros of AI and data center commitments. Paul Walsh: And I think if we added up, as per the research that we published a few months ago, Europe's announced over 350 billion euros in proposed investments around AI. Emmet Kelly: Yeah, absolutely. It's a good stat. Now, where people can get a little bit cynical is they can say a couple of things. Firstly, it's now over a year since the Mario Draghi report came out. And what's changed since? Absolutely nothing, unfortunately. And secondly, when I look at powering AI, I like to compare Europe to what's happening in the United States. I mean, the U.S. is giving access to nuclear power to AI. It started with Three Mile Island… Paul Walsh: Yeah. The nuclear renaissance is… Emmet Kelly: The nuclear renaissance is absolutely huge. Now, what's underappreciated is that Europe has actually got a massive nuclear power bank. It's right up there. But unfortunately, we're decommissioning some of our nuclear power around Europe, so we're going the wrong way from that perspective. Whereas President Trump is opening up nuclear power to AI tech companies and data centers.
Then over in the States, we also have gas turbines. That's a very, very big growth area, and we're not quite on top of that here in Europe. So, looking at this year, I have a feeling that the Americans will probably increase their data center capacity somewhere between, it's incredible, 35 and 50 percent. And I think in Europe we're probably looking at something like 10 percent again. Paul Walsh: Okay. Understood. Emmet Kelly: So, we're growing in Europe, but we're way, way behind as a starting point. And it feels like the others are pulling away. The other big change I'd highlight is that the Chinese are really going to accelerate their data center growth this year as well. They've got their act together, and you'll see them heading probably towards 30 gigs of capacity by the end of next year. Paul Walsh: Alright, we're out of time. The TMT Edge is alive and kicking in Europe. I want to thank Emmet, Lee, and Adam for their time, and I just want to wish everybody a great day today. Thank you. (Applause) That was my conversation with Adam, Emmet, and Lee. Many thanks again to them for telling us about the latest in their areas of research, and to the live audience for hearing us out. And thanks to you as well for listening. Let us know what you think about this and other episodes by leaving us a review wherever you get your podcasts. And if you enjoy listening to Thoughts on the Market, please tell a friend or colleague about the podcast today.
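To put the numbers quoted in the conversation on one page, here is a rough back-of-the-envelope sketch in Python. The $35 million per megawatt, the $20 to 25 million chip share, and the regional capacity and growth figures are the speakers' round numbers; the midpoints chosen below and the resulting dollar totals are illustrative assumptions, not forecasts from the episode.

```python
# Back-of-the-envelope sketch of the data center figures quoted above.
# Inputs are the speakers' round numbers; midpoints are illustrative guesses.

COST_PER_MW_USD = 35e6            # ~$35M per megawatt of capacity (as quoted)
CHIPS_PER_MW_USD = 22.5e6         # midpoint of the quoted $20-25M chip share
PICKS_AND_SHOVELS_PER_MW_USD = COST_PER_MW_USD - CHIPS_PER_MW_USD

# End-of-2024 installed base (GW) and assumed 2025 growth rates.
regions = {
    "US":     {"gw_2024": 35.0, "growth": 0.425},  # midpoint of the 35-50% range
    "China":  {"gw_2024": 22.0, "growth": 0.30},   # "heading towards 30 GW"
    "Europe": {"gw_2024": 7.5,  "growth": 0.10},   # midpoint of 7-8 GW, ~10% growth
}

for name, r in regions.items():
    added_gw = r["gw_2024"] * r["growth"]
    added_mw = added_gw * 1000                     # GW -> MW
    capex = added_mw * COST_PER_MW_USD
    non_chip = added_mw * PICKS_AND_SHOVELS_PER_MW_USD
    print(f"{name}: +{added_gw:.1f} GW implies ~${capex / 1e9:.0f}B of investment, "
          f"of which ~${non_chip / 1e9:.0f}B is non-chip 'picks and shovels'")
```

Even on these crude assumptions, the implied U.S. spend lands in the hundreds of billions of dollars for a single year, which is consistent with the claim that data center investment is now material to U.S. GDP growth.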
Communication isn't about talking more; it's about connecting better. As Malcolm Forbes put it, "The art of communication lies in the listening." In today's episode, Debi shares about the art of communication and how to learn the value of connection.
Most conflicts are not about what we say, but how we say it.
Words make up only part of our communication.
Our tone, the timing, and the delivery matter as much as the content.
The tone we use can far outweigh any words we say while we are trying to communicate.
The phone has become a barrier in relationships. When phones are always present:
Our eye contact decreases
The interruptions increase
And our conversations become shallower
Listening is the greatest gift you can give someone. It says:
You matter
Your words matter
Your experience matters
CONNECT WITH DEBI
Do you feel stuck? Do you sense it's time for a change, but are unsure where to start or how to move forward? Schedule a clarity call!
Free Clarity Call: https://calendly.com/debironca/free-clarity-call
Website – https://www.debironca.com
Instagram – @debironca
Email – info@debironca.com
Check out my online course! Your Story's Changing: Finding Purpose in Life's Transitions
https://course.sequoiatransitioncoaching.com/8-week-program
The Family Letter by Debi Ronca – International Best Seller
https://www.amazon.com/dp/B07SSJFXBD
Travel is often pictured as excitement, new sights, and adventure. But for many, it can feel overwhelming, exhausting, or even impossible before a trip begins. For individuals navigating ADHD, anxiety, autism, or learning differences, the unfamiliar sounds, routines, and expectations of travel can make even the simplest journey feel heavy. And yet, these truths are rarely spoken with honesty, empathy, or care. In this deeply moving episode of Speaking of Travel, we sit down with Dr. Andrew Kahn, licensed psychologist, Associate Director at Understood.org, and a national voice on mental health and neurodiversity. Dr. Kahn brings more than 25 years of professional experience, along with his own lived experience as someone with learning and thinking differences. The result is a conversation that is both profoundly human and deeply practical, full of insight, compassion, and wisdom for travelers of all kinds. Dr. Kahn reminds us that preparation is an act of love for ourselves and for those we care about. He shares stories of patience and understanding that ripple outward, turning moments of stress into experiences of connection, growth, and joy. This conversation is an invitation to travel differently, not faster, not farther, but more gently. It's about creating space for empathy, for self-compassion, and for recognizing the courage it takes to step into the world when it feels unpredictable or challenging. Only on Speaking of Travel! Tune in. Thanks for listening to Speaking of Travel! Visit speakingoftravel.net for travel tips, travel stories, and ways you can become a more savvy traveler.
This is our final episode of the year, and today I'm doing something I always love—looking back and reflecting. I'll be revealing the Top 10 most downloaded episodes of 2025, sharing why these conversations resonated so deeply, and highlighting a couple of episodes and guests that were personal favorites for me.
Top 10
1. Mastering the Art of Validation: How to Make People Feel Seen and Understood with Caroline Fleck, PhD
2. How to Raise Capable and Resilient Children with Sheryl Ziegler
3. Raising Emotionally Aware Boys Who Turn into Strong Men with Tosha Schore
4. Reclaiming Your Inner Child: Heal the Past and Live Fully with Nina Mongendre
5. The Family Dynamic: The Mystery of Sibling Success with Susan Dominus
6. Why Kids Are Struggling—and How to Help Them Thrive with Ned Johnson
7. Stop People Pleasing: How to Value Yourself and Your Needs with Amy Wilson
8. How to Recover from Burnout by Regulating Your Emotions and Nervous System with Michelle Grosser
9. How to Connect Deeply With Yourself and Others with Kristy Lauricella
10. 7 Ways to Improve Well-Being, Reduce Stress, and Stay Grounded in Difficult Times with Anna Seewald and Laura Froyen
LINKS AND RESOURCES
Support the podcast by making a donation (suggested amount $15)
Call 732-763-2576 to leave a voicemail.
info@authenticparenting.com
Send audio messages using Speakpipe.
Join the Authentic Parenting Community on Facebook.
Work w/Anna. Listeners get 10% off her services.
Podcast Production by Aminur: https://www.upwork.com/freelancers/~019855d91718719d11
NARRATOR: Welcome back to the Metropolitan Museum of Toys and Childhood Artifacts—where the lights dim, the doors lock, and the exhibits do what exhibits are not supposed to do.
[SFX: A security door clicks shut.]
NARRATOR (cont.): Tonight, our night watchman makes his rounds with a thermos of tea, a sensible flashlight, and the quiet confidence of a man who believes no object smaller than a breadbox could possibly ruin his evening.
[SFX: Footsteps. Keys jingle softly.]
NIGHT WATCHMAN (EBENEZER SMITH): All right, Mr. Museum… let's see what you've got for me tonight. No juggling dolls. No ventriloquist dummies practicing stand-up. No remote-control cars attempting a heist.
[SFX: He stops walking. The ambience hushes slightly.]
NIGHT WATCHMAN (cont.): Oh. The Digital Fads case.
NARRATOR: A glass display case labeled "Pocket Companions: 1990s–2000s." Inside: a pager, a flip phone, a tiny handheld game, and—resting on a velvet stand like a jewel—an egg-shaped plastic keychain with three little buttons.
[SFX: A tiny electronic "BEEP-BEEP!"]
NIGHT WATCHMAN: …No.
[SFX: "BEEP! BEEP! BEEP!" intensifies.]
NIGHT WATCHMAN (cont.): Absolutely not. We are not doing this tonight. I remember you. I remember the… the neediness.
NARRATOR: The night watchman leans closer. The little screen glows with a pixelated face that looks… concerned. Accusatory. Dramatic.
[SFX: "BEEP!" a little sadder now.]
NIGHT WATCHMAN: Fine. All right. Rule of the museum: if you're going to speak, you tell me your name and what you are. No mysterious beeping from the shadows. Understood?
[SFX: One polite beep. Then a short, proud chime.]
Support the show
Thank you for experiencing Celebrate Creativity.
If you have ADHD or autism, research shows you're at a much higher risk for developing chronic pain — a connection many doctors and patients still don't know about. In this episode of Hyperfocus, we talk with a doctor who's trying to change that.
Dr. Michael Lenz, a Wisconsin-based pain specialist, explains what the medical community is discovering about the connection between ADHD, autism, and chronic pain, including conditions like fibromyalgia and migraines. He also shares stories from his practice, including times when treating a patient's ADHD unexpectedly improved their chronic pain symptoms.
For more on this topic:
Dr. Lenz's podcast and book
The Weak Link: Hypotonia in Infancy and Autism Early Identification - PMC
ADHD-pain: Characteristics of chronic pain and association with muscular dysregulation in adults with ADHD
Order friend of the show Craig Thomas' book
NIH study on joint hypermobility
For a transcript and more resources, visit Hyperfocus on Understood.org. You can also email us at hyperfocus@understood.org. Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week, we're sharing a powerful episode from our friends at Hyperfocus — a deeply personal story with its own "aha" moments. Inattentive ADHD is often missed, especially in boys who don't fit the typical ADHD stereotype. Brandon Saiz shares his later-in-life diagnosis and what it meant to have been overlooked for so long. If you're not already listening to Hyperfocus, check it out here.
Content warning: Mentions of suicide
For more on this topic:
Read: The 3 types of ADHD
Listen: The "devastating" findings of a decades-long ADHD study
Follow: Brandon Saiz on Substack
For a transcript and more resources, visit our friends at Hyperfocus on Understood.org. You can also email us at adhdaha@understood.org. Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Mary: The Lord's Servant
Luke 1.26-56
Gordon Dabbs, PhD
Advent, for believers, often refers to the season leading up to Christmas. The season of waiting. Hoping. Trusting. Advent is the season of almost… but not yet…
"What is coming upon the world is the Light of the World. It is Christ. That is the comfort of it. The challenge of it is that it has not come yet. Only the hope for it has come, only the longing for it. In the meantime we are in the dark, and the dark, God knows, is also in us. We watch and wait for a holiness to heal us and hallow us, to liberate us from the dark. Advent is like the hush in a theater just before the curtain rises." ~ Frederick Buechner
Christmas is not about how we find God. It's about how God found us. And He chose to arrive in a way no one expected. Mary was from Nazareth. Young—likely a teenager. Poor. Unmarried. Engaged to a carpenter of like socio-economic means. When the world says, "Not enough," God says, "Watch this." Mary wasn't chosen because she was impressive. She was chosen because she was available.
Luke 1.26-29 (ESV): In the sixth month the angel Gabriel was sent from God to a city of Galilee named Nazareth, to a virgin betrothed to a man whose name was Joseph, of the house of David. And the virgin's name was Mary. And he came to her and said, "Greetings, O favored one, the Lord is with you!" But she was greatly troubled at the saying, and tried to discern what sort of greeting this might be.
Yes, it is entirely possible to be right in the center of God's will and still feel confused and disturbed at the same time.
Luke 1.35, 37 (ESV): The Holy Spirit will come upon you, and the power of the Most High will overshadow you. . . For nothing will be impossible with God.
Mary speaks words that change the direction of history…
Luke 1.38 (ESV): Behold, I am the servant of the Lord; let it be to me according to your word.
It is very easy to care more about the approval of others than the approval of God. We want to be liked. Understood. To avoid tension.
Luke 1.45 (NLT): You are blessed because you believed that the Lord would do what he said.
Christmas is the story of what God can do when someone says "yes." May we have the courage to do the same.
Subscribe to PRESTONCREST - with Gordon Dabbs on Soundwise
When I read The Art of Money by Bari Tessler, it felt as if I had finally found someone who UNDERSTOOD me and what I was experiencing with money. For anyone who has felt that either traditional financial literacy was missing the emotional element or the manifestation money world was missing some grounding, you will love learning about how Bari helps people truly create a more mindful, empowered relationship with money by using somatic-based, values-forward, gentle money practices.
In this episode, we talked about:
how she became the OG financial therapist and started a new industry of somatic-based money work back in the 90s!!!
wisdom and insight into how to navigate money in relationship & how to start having regular (albeit sometimes uncomfortable) money convos with your partner
what a "money koan" is and how we can use this magical concept to navigate even the stickiest money challenges that may arise in our lives (especially during transitions)
Bari's perspective on debt, how she's used it in her own business, and how our relationship with money impacts our business decisions & outcomes
the never-before-shared announcement of where Bari's taking her business next after 25 years of teaching money work!
AND AND AND! For the first time ever, Bari has released her signature money program The Art of Money as a self-guided option at an extremely affordable rate--and if you purchase before 12/31/25, you'll also get a complimentary Intuitive Money Guidance session with me to hold space for you as you heal your money wounds, develop stronger confidence in your money practices, and learn to create the empowered relationship with money you know you desire. Get The Art of Money program here (for just $199!!!!!) by 12/31 and get your BONUS Intuitive Money Guidance session by simply emailing a screenshot of your receipt to support@emilyelizamoyer.com.
Use the 1:1 Intuitive Money Guidance session to:
Clear the deep subconscious blocks at the root that your money stories uncovered
Co-regulate your nervous system & create safety to finally face your numbers
Align your business & offers to your soul's truth, not your ego's "shoulds"
Body double to turn knowledge overwhelm into simple, actionable money steps
Hold space for you to practice attuning to your body's signals, wisdom, and intuitive insight
Buy the Art of Money book here, check out her other resources like her Money Dates Workshop on her website, and follow along the next chapter of her business on Instagram.
In this listener Q&A, Cate tackles two wildly relatable ADHD questions: sudden sensory discomfort during intimacy, and the maddening cycle of not being able to start a task… then not being able to stop. From sensory overload and burnout to hyperfocus, momentum anxiety, and emotional regulation, Cate breaks down what's going on and how to navigate it without losing it. Thanks to our listeners for these deeply ADHD-coded questions! Keep 'em coming.
For more on this topic:
Listen: ADHD and sensory overwhelm
Listen: ADHD sensory challenges and sex
Read: ADHD and hyperfocus
For a transcript and more resources, visit Sorry, I Missed This on Understood.org. You can also email us at sorryimissedthis@understood.org. Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this final episode of Climbing the Walls, Danielle explores the frustration women with ADHD feel toward a medical community that can't answer their questions about how hormones impact ADHD. Searching for answers, they turn to online communities for information and support.
Danielle talks to experts about the latest research on ADHD in women and what the future of treatment could look like.
More on this story:
A guide to hormones and ADHD
ADHD and periods
ADHD and menopause
For a transcript and more resources, visit Climbing the Walls on Understood.org. You can also email us at podcast@understood.org. Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Sorry, I Missed This: The Everything Guide to ADHD and Relationships with Cate Osborn
On episode 5 of Climbing the Walls, Danielle attends an ADHD camp in Michigan and hears stories from several women about being diagnosed with ADHD later in life. Many of them have one thing in common.
More on this story:
I'm sure my mom has ADHD. Should I tell her?
What is the ADHD tax?
"Who are we missing?" One doctor's lifelong fight for women with ADHD
For a transcript and more resources, visit Climbing the Walls on Understood.org. You can also email us at podcast@understood.org. Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
How the Vikings understood the power of the blitz and the Cowboys did not
K&C Masterpiece on 105.3 The Fan
A Regnum Christi Daily Meditation. Sign up to receive the text in your email daily at RegnumChristi.com
Back by popular demand… it's Ange Nolan! Ange returns to ADHD Aha! to share how her ADHD journey has evolved since we last spoke. That includes her decision to study disability theology and help make worship spaces more supportive for neurodivergent people. Going back to school brought up old memories of past academic struggles. Ange talks openly about navigating those feelings with more clarity and self-understanding. She also gives an update on her personal life — this time, celebrating a calm, steady relationship that looks very different from the intense dynamics she experienced in the past.
For more on this topic:
Ange's first interview: ADHD, loving intensely, and impulsivity
A guide to ADHD and emotions
For a transcript and more resources, visit ADHD Aha! on Understood.org. You can also email us at adhdaha@understood.org. Understood.org is a nonprofit organization dedicated to empowering people with learning and thinking differences, like ADHD and dyslexia. If you want to help us continue this work, donate at understood.org/give Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
How Secular Government Became a Confessional Church
Manage our Thought Sins unto Secular Salvation https://youtu.be/455CizDxGxU
@WhiteStoneName Imagination as thinking: confirmation bias, information & meaning https://www.youtube.com/live/3TBDRuMJbEQ?si=8n18pB9IjmAt7wnw
Marcus Shera on Modernity, Consistency and Completeness https://youtu.be/jKdtA96Lh8g?si=2Ewx2fzpP0Vl3vUS
https://newsfromuncibal.substack.com/p/disinformation-hate-speech-and-the
@ChrisWillx Being Damaged Is Not A Personality Trait - Freya India https://youtu.be/wmU7VVxhERw?si=ZKy7wprZ7f9e_wze
https://richardbeck.substack.com/p/the-antichrist-and-the-katechon
https://richardbeck.substack.com/p/the-antichrist-and-the-katechon-119
https://richardbeck.substack.com/p/rene-girard-and-moral-influence-3b3
https://richardbeck.substack.com/p/rene-girard-and-moral-influence-b22
https://richardbeck.substack.com/p/rene-girard-and-moral-influence-389
https://richardbeck.substack.com/p/rene-girard-and-moral-influence
https://richardbeck.substack.com/p/the-antichrist-and-the-katechon-925
https://www.livingstonescrc.com/give
Register for the Estuary/Cleanup Weekend https://lscrc.elvanto.net/form/94f5e542-facc-4764-9883-442f982df447
Paul Vander Klay clips channel https://www.youtube.com/channel/UCX0jIcadtoxELSwehCh5QTg
https://www.meetup.com/sacramento-estuary/
My Substack https://paulvanderklay.substack.com/
Bridges of meaning https://discord.gg/mQGdwNca
Estuary Hub Link https://www.estuaryhub.com/
There is a video version of this podcast on YouTube at http://www.youtube.com/paulvanderklay
To listen to this on iTunes https://itunes.apple.com/us/podcast/paul-vanderklays-podcast/id1394314333
If you need the RSS feed for your podcast player https://paulvanderklay.podbean.com/feed/
All Amazon links here are part of the Amazon Affiliate Program. Amazon pays me a small commission at no additional cost to you if you buy through one of the product links here. This is one (free to you) way to support my videos. https://paypal.me/paulvanderklay
Blockchain backup on Lbry https://odysee.com/@paulvanderklay
https://www.patreon.com/paulvanderklay
Paul's Church Content at Living Stones Channel https://www.youtube.com/channel/UCh7bdktIALZ9Nq41oVCvW-A
To support Paul's work by supporting his church give here. https://tithe.ly/give?c=2160640
In this heartfelt and practical talk, Dean Graziosi shares the eight lessons that make him a better dad — and a more connected human. With humor, vulnerability, and storytelling, he reveals how to truly connect with your kids: by entering their world, listening to understand, finding root causes behind behavior, and modeling authenticity, enthusiasm, and empathy. These timeless lessons apply to parenting, leadership, and life itself.
JOIN QOD CLUB: Ready to stop growing alone? Join QOD Club and connect with people who actually get you. Get weekly Monday Mentorship Calls, Wednesday Book Club discussions, and brand-new business, mindset, and social media trainings coming soon. Start your 30-day trial for only $9!
GET MY TOP 28 BOOK RECOMMENDATIONS: Click here to get your free copy of "28 Books That Will Rewire Your Mindset for Success and Self-Mastery" curated by yours truly!
Source: 8 Success Lessons Every Parent Should Know
Hosted by Sean Croxton
Follow me on Instagram
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode, we conclude our series on the garden of our hearts and explore what it means to cultivate unity—which is especially important as we enter the busy and often stressful holiday season. We talk about the difference between anger and contempt and how contempt can not only fracture relationships but also plant seeds of division in our hearts. We also reflect on how we can respond to discord with humility, a holy curiosity, and a genuine desire to understand. Ultimately, unity begins with love, bears good fruit, and reflects the presence of Christ within us.
Heather's One Thing - The Cheesecloth Turkey Basting Method (Example Here)
Sister Miriam's One Thing - College Volleyball Playoffs (especially Nebraska)
Michelle's One Thing - Twinkling Trees from Walmart
Announcement: Our Advent Study begins December 1st, 2025!
Journal Questions:
Where in my heart am I harboring contempt? What groups of people or individuals do I see as worthless?
When was the last time that someone treated you with contempt? How did that impact you?
How am I seeking to understand people with different opinions?
How is the Lord inviting me to refine and cultivate my tone to speak love to others?
When faced with division and disunity, are the movements of my heart and my external actions congruent?
Discussion Questions:
What differences have you observed between conformity and unity?
What differences have you observed between anger and contempt?
When are you tempted to roll your eyes, sneer, act with hostility, or speak with sarcasm?
When is it hardest for you to cultivate unity?
Quote to Ponder:
"To understand one another and to grow in charity and truth, we need to pause, to accept and listen to one another. In this way, we already begin to experience unity. Unity grows along the way; it never stands still. Unity happens when we walk together." (Pope Francis, Homily at second Vespers on the solemnity of the conversion of St. Paul, Jan. 25, 2015)
Scripture for Lectio:
"I therefore, a prisoner for the Lord, beg you to lead a life worthy of the calling to which you have been called, with all lowliness and meekness, with patience, forbearing one another in love, eager to maintain the unity of the Spirit in the bond of peace. There is one body and one Spirit… one Lord, one faith, one baptism." (Ephesians 4:1-6)
Sponsor - Glory: Women's Gathering:
If you're feeling like your spiritual life could use a little more support than podcasts and online formation can offer, you need to check out this week's sponsor, the Glory: Women's Conference hosted by Steubenville Conferences in partnership with Heather Khym. We want to invite you to join Heather, Michelle, and our dear friends Debbie Herbeck, Sarah Kaczmarek, Monica Richards, and Fr. Dave Pivonka TOR this coming June 5-7 in Steubenville, Ohio, as we gather with women across generations and seek God's restoration and healing. This gathering will include talks, worship, prayer experiences, and the opportunity to interact with fellow Abiding Together listeners and new friends from all over who will be flying in. Heather and Michelle would absolutely love to meet you. Whether you come with your Abiding Together small group, with a close friend, or on your own, we can't wait to gather in fellowship with you. Registration is now open for the Glory: Women's Conference. For early bird pricing of only $259, register by December 31st. The price will go up in the new year. Visit steubenvilleconferences.com/events/glory for more information or to register!
Chapters:
00:00 Glory: Women's Gathering
01:31 Intro
02:22 Advent Announcement
03:14 Welcome
05:19 Guiding Quote and Scripture Verse
06:19 Distinguishing Anger vs Contempt
11:28 Living Like We are One Body in Christ
13:48 Seeking to Understand Rather than be Understood
18:22 The Power of Our Tone of Voice
20:35 Examining the Fruit in Our Lives
22:49 Maturing Spiritually
27:06 Repairing Strained Relationships
29:08 One Things