Podcasts about Kiwi

  • 4,320 PODCASTS
  • 17,593 EPISODES
  • 38m AVG DURATION
  • 2 DAILY NEW EPISODES
  • Jan 29, 2026 LATEST

POPULARITY

[Popularity trend chart, 2019–2026]

Categories




    Best podcasts about Kiwi


    Latest podcast episodes about Kiwi

    RNZ: Checkpoint
    What happened to the seaweed pest threatening native species?

    RNZ: Checkpoint

    Play Episode Listen Later Jan 29, 2026 4:10


    There's a mystery lurking in the waters of northern New Zealand. What happened to a highly invasive seaweed pest that smothered huge areas of seabed, threatening native species and the Kiwi way of life? Northland reporter Peter de Graaf tried to find out.

    RNZ: The Detail
    A Kiwi on Ukraine's frontline

    RNZ: The Detail

    Play Episode Listen Later Jan 28, 2026 22:16


    Three months after an injury in Ukraine that cost him his leg, New Zealander Khol Gillies tells RNZ's Lisa Owen how he ended up fighting for Ukraine, and what it was like waiting in a bunker for days to be evacuated. Guests: Khol Gillies, Jasmine Gillies. Learn more: to contribute to Khol's recovery, find his Givealittle here. Find The Detail on Newsroom or RNZ. Go to this episode on rnz.co.nz for more details.

    RNZ: Checkpoint
    Kiwi soldier speaks on losing leg after fighting in Ukraine

    RNZ: Checkpoint

    Play Episode Listen Later Jan 28, 2026 14:34


    A New Zealander who had his leg amputated after being severely injured while fighting in Ukraine says he sang Aotearoa's national anthem to keep himself going during his excruciating rescue. Khol Gillies had to wait days to be evacuated from the battlefield because fierce fighting made it virtually impossible to reach him. Gillies, originally from Hawke's Bay, had been in Ukraine for six months fighting as a volunteer. Gillies spoke to Lisa Owen.

    Lead on Purpose with James Laughlin
    Robbie Paul on The Truth About Venture Capital No One Explains Clearly

    Lead on Purpose with James Laughlin

    Play Episode Listen Later Jan 27, 2026 69:50


    In this episode of Lead On Purpose, I sit down with Robbie Paul from Icehouse Ventures to demystify venture capital and what it really takes to back world-class founders. We unpack risk and reward, bootstrapping versus raising capital, the traits behind standout leadership, and why trust and honesty are non-negotiable in a long-game industry.

    What we cover:

      • What venture capital actually is, and why it has a different risk and return profile to KiwiSaver and index funds
      • Bootstrapping versus taking capital, and how external funding can level up ambition, clarity, and accountability
      • Behind-the-scenes stories from Kiwi success plays like Crimson and Power by Proxy, and what makes a founder worth backing
      • The reality of failure in venture, why safe bets rarely create outsized outcomes, and what successful founders do differently
      • How to pitch and build relationships the right way, plus why honesty beats hype when trust is the real currency

    If you want a clearer understanding of how great companies are built, why people matter more than ideas, and what long-term leadership really looks like, this conversation will stretch your thinking.

    You can learn more about Icehouse Ventures here - https://www.icehouseventures.co.nz/investors
    Connect with Robbie on LinkedIn here - https://www.linkedin.com/in/robertjpaul/?originalSubdomain=nz
    If you're interested in having me deliver a keynote or workshop for your team, contact Caroline at caroline@jjlaughlin.com
    Website: https://www.jjlaughlin.com
    YouTube: https://www.youtube.com/channel/UC6GETJbxpgulYcYc6QAKLHA
    Facebook: https://www.facebook.com/JamesLaughlinOfficial
    Instagram: https://www.instagram.com/jameslaughlinofficial/
    Apple Podcast: https://podcasts.apple.com/nz/podcast/life-on-purpose-with-james-laughlin/id1547874035
    Spotify: https://open.spotify.com/show/3WBElxcvhCHtJWBac3nOlF?si=hotcGzHVRACeAx4GvybVOQ
    LinkedIn: https://www.linkedin.com/in/jameslaughlincoaching/

    James Laughlin is a High Performance Leadership Coach, former 7-time World Champion, host of the Lead On Purpose Podcast, and an executive coach to high performers and leaders. James is based in Christchurch, New Zealand. Send me a personal text message. Join me at the 2026 Goal-setting Workshop here - jjlaughlin.com/2026goals - If you're interested in booking me for a keynote or workshop, contact Caroline at caroline@jjlaughlin.com. Support the show.

    NZ Tech Podcast
    Inside SYOS Aerospace: Rapid Development, Engineering Culture, and Global Impact

    NZ Tech Podcast

    Play Episode Listen Later Jan 27, 2026 46:23


    Join host Paul Spain as he sits down with Sam Vye of Syos Aerospace, a trailblazing New Zealand tech company shaking up the unmanned vehicle industry. Hear how Syos rapidly prototypes cutting-edge drones, outpaces global defence giants, and scales from Kiwi innovation to international contracts. Discover their unique engineering culture, rapid product development, and what it takes to compete with the world's biggest aerospace companies.

    Thanks to our partners One NZ, Workday, 2degrees, HP, Spark and Gorilla Technology.

    Proper True Yarn
    Russ: Lost Naked in Christchurch & the 45-Hour Bender

    Proper True Yarn

    Play Episode Listen Later Jan 27, 2026 7:07


    Russ tells the story of a “quick trip” to Christchurch that turned into a 45-hour bender: pulling a sickie, smashing red wines on the plane, his mate getting thrown in detox at Macky G, ending up naked in fluffy slippers wandering the streets at 4am, getting scooped by the boys in an Uber, then backing it up for another full day on the piss. Pure chaos, peak Kiwi mischief, absolute Proper True Yarn territory.#propertrueyarn Hosted on Acast. See acast.com/privacy for more information.

    The Debrief with Dave O'Neil
    Brendhan Lovegrove, Mt Eliza

    The Debrief with Dave O'Neil

    Play Episode Listen Later Jan 25, 2026 39:59


    New Zealand comic Brendhan Lovegrove! Plenty here for the Kiwi expats in Aus, and for the Australians seeing so many New Zealand comics in Australia. Heard of The Exponents? Check out Somehow UN-Related wherever you're listening, or go to Nearly.com.au

    About The Debrief: Original theme music by Kit Warhurst. Hear the making of The Debrief theme song. Artwork created by Stacy Gougoulis. Co-produced by Nearly Media.

    Looking for another podcast? The Junkees with Dave O'Neil & Kitty Flanagan - The sweet and salty roundabout! Junk food abounds! See omnystudio.com/listener for privacy information.

    Between Two Beers Podcast
    Owen Eastwood: The Unconventional Kiwi Coaching The World's Greatest Teams

    Between Two Beers Podcast

    Play Episode Listen Later Jan 25, 2026 118:57


    He is the secret weapon behind Chelsea FC, the England football team, the South African Proteas, and NATO's Command Group. Yet Owen Eastwood has no coaching qualifications, describes himself as an "imposter," and often refuses to watch the actual games his teams play. In this episode, we sit down with the most in-demand performance coach you've never heard of. We discuss the "mystical" session where he made hardened NATO generals cry, why he stays silent for three months before coaching a new team, and the "ancient code" of belonging that now drives billion-dollar organisations.

    This episode is brought to you by our proud sponsors TAB. Got a hunch? Get your bet on. Steve and Seamus are proud to be dressed by Barkers. Hosted on Acast. See acast.com/privacy for more information.

    OnlyFeehans
    #262 Just Like a Kiwi (w/ Irene Morales)

    OnlyFeehans

    Play Episode Listen Later Jan 22, 2026 59:38


    FOLLOW IRENE! https://www.youtube.com/@freakshitshow https://linktr.ee/irenesmorales https://www.instagram.com/irenesmorales/ WE HAVE MERCH! https://onlyfeehans.dashery.com/ CHECK OUT KERRYN'S NEW SPECIAL ON OFTV https://of.tv/c/kerryn-feehan FOLLOW THE SHOW: Instagram: https://www.instagram.com/onlyfeehans/ Patreon: https://www.patreon.com/OnlyFeehans Apple: https://podcasts.apple.com/us/podcast/onlyfeehans/id1538154933 Spotify: https://open.spotify.com/show/5ojWPy3lzm1P18ePxAjGFB?si=a9ca6d6a493e474f YouTube: https://www.youtube.com/@onlyfeehans FOLLOW KERRYN: Instagram: https://www.instagram.com/kerrynfeehan/ Twitter: https://twitter.com/FeehanKerryn YouTube: https://www.youtube.com/@onlyfeehans Producer & Editor: Tim McLaughlin https://www.instagram.com/hot_comic69/ https://www.youtube.com/@GreatHangPodcast

    Green And Gold Rugby
    The Dropped Kick-Off - Cluckin' for Crichton

    Green And Gold Rugby

    Play Episode Listen Later Jan 22, 2026 33:34


    They’re nervous, we’re excited. Angus Crichton will Steggles for industrialised Kiwi meat - itself a bit yellow in colour as they sack Razor clam Robertson - as he becomes the 3rd Rooster of recent times to beg for the upgrade from avian to macropod. Nick and Natho are on board to ride the chocobo of delusion in this bloated metaphor. They chirp: Crichton! How good? Our potential sumptuous backline for ‘27. Who else do you want - let’s get greedy! Razor getting the cut. And fightin’ flowers at Tahs training. See omnystudio.com/listener for privacy information.

    ZM's Bree & Clint
    ZM's Bree & Clint Podcast - 21st January 2026

    ZM's Bree & Clint

    Play Episode Listen Later Jan 21, 2026 68:12 Transcription Available


    Unhinged behaviour. Forget the 2016 trend, how great was the year 2006? Bree doesn't know the difference between the Kiwi and Aussie accent. NZ's most expensive pie. See omnystudio.com/listener for privacy information.

    Peggy Fo Show
    EP104 | How did an overseas romance win Lee Han's heart and get her to say yes? Without 800,000, don't even dream of going to Fashion Week?! The fashion industry's unspoken rules? - Kiwi Lee Han (李函)

    Peggy Fo Show

    Play Episode Listen Later Jan 21, 2026 49:10


    A while ago I saw the HPV (human papillomavirus) health-education video Lee Han filmed, and I was surprised an awareness video could be made with such quality. When I looked into it myself, I discovered how many myths surround HPV! So today we've invited Kiwi Lee Han (李函) back to finish the topics we didn't get through last time. Listen in as Lee Han shares the different situations she's encountered at fashion weeks and in the fashion industry, and the unexpected surprise that followed a romantic encounter abroad.

    The Insert Credit Show
    Ep. 424 - Insert Credit Annual #7

    The Insert Credit Show

    Play Episode Listen Later Jan 19, 2026 159:35


    As has become tradition, Insert Credit ranks the best games of all time. Hosted by Alex Jaffe, with Frank Cifaldi, Ash Parrish, and Brandon Sheffield. Edited by Esper Quinn, original music by Kurt Feldman. Watch episodes with full video on YouTube. Discuss this episode in the Insert Credit Forums.

    SHOW NOTES: Digital Eclipse Final Fantasy VIII “If I die here alone, a miserable death, no one will bother to pick up my bones.” Sonic the Hedgehog Arator the Redeemer World of WarCraft Demonschool Ryu Number #IDARB Street Fighter: 30th Anniversary Collection Aetherdrift Bestow Greatness (the supposed Ash card) Johnny Silverhand Yakuza 0 EarthBound Super Mario Bros. 3 Panzer Dragoon II: Zwei Doom OutRun: Online Arcade Silent Hill

    1: Frank's Mom's Favorite Game (08:56) Super Mario series
    2: Ash's Cousin Eric's Favorite Game (14:07) Ms. Pac-Man Sega Genesis Mortal Kombat series Pac-Man 2: The New Adventures
    3: Brandon's Partner Christina's Favorite Game (19:45) Phoenix Wright: Ace Attorney Nintendo DS Grand Theft Auto series Game Boy Advance Ivy the Kiwi?
    4: Frank's Co-host Ash's Favorite Game (25:08) Dragon Age: Inquisition Overwatch Bayonetta Sonic the Hedgehog Part II Sonic the Hedgehog 3 & Knuckles Sega Technical Institute Mark Cerny
    5: Ash's Friend Nicki's Favorite Game (34:34) Pokémon Red Version The Vision of Escaflowne ACen Pokémon Blue Version Sega Game Gear MissingNo Sailor Moon Street Sharks Pikachu
    6: Brandon's Dad's Favorite Game (39:52) Military Madness TurboGrafx-16 Famicom Wars Advance Wars Neutopia Legend of Zelda series Tetris Spelling Bee Sub Hunt PalmPilot Atari 2600 Circus
    7: Frank's Best Friend and Editor Esper's Favorite Game (45:31) Drifting Off with Joe Pera: A Sleep Podcast The Hobbit: An Unexpected Journey (2012) Final Fantasy VII Phil Salvador Sega Saturn Final Fantasy VII: Remake Barrett Wallace Final Fantasy Got It Right With Two Black Fathers Crisis Core: Final Fantasy VII
    8: Ash's Friend Will's Favorite Game (55:34) Karazhan Ventrillo Hikikomori Elwynn Forest Dreadsteed
    9: Brandon's Friend Scott's Favorite Game (01:04:38) Final Fantasy VI Super Nintendo Entertainment System Record of Lodoss War La Blue Girl Nobuo Uematsu
    10: Frank's Friend Lucy's Favorite Game (01:11:42) Mass Effect 2 Pulp Fiction (1994) Fallout: New Vegas
    11: Ash's Cousin Jason's Favorite Game (01:24:29) Dragon Ball Z: Budokai Dragon Ball Z Vegeta Dragon Ball Dragon Ball Super
    12: Brandon's Former Coworker Christian's Favorite Game (01:32:36) Ys: Book I & II Tengai Makyō: Ziria Bump System The Frog For Whom the Bell Tolls
    13: Frank's Wife Amanda's Second Favorite Game (01:38:38) Kickle Cubicle Irem R-Type series Disaster Report series Adventures of Lolo
    14: Ash's Husband Travis's Favorite Game (01:42:33) Contra III: The Alien Wars Mana series Castlevania: Bloodlines Contra Hard Corps Weenie Hut Jr's Super Castlevania IV UFO 50
    15: Brandon's College Friend and Insert Credit Co-Founder Vincent's Favorite Game (01:47:42) Metaphor: ReFantazio Puyo Puyo Sun Bubble Bobble Puzzle Bobble
    16: Frank's Pal Ian's Favorite Game (01:52:31) Killer7 Suda51 Cam Clarke Resident Evil 4 No More Heroes Shadows of the Damned
    17: Ash's Sister Alexis's Favorite Game (01:59:53) Uno No Mercy Connect Four Kingdom Hearts Need for Speed III: Hot Pursuit Disney Channel No Strings Attached Katamari Damacy
    18: Brandon's Friend Chaz's Favorite Game (02:06:00) Shining Force II N-Gage
    19: Frank's Other Friend Ian's Favorite Game (02:10:48) Super Mario World Kaizo Super Mario Maker Disneyland Walt Disney World
    20: Ash's Best Friend Abby's Favorite Game (02:17:11) Dragon Age: Inquisition Baldur's Gate series Alistair Sid Meier's Civilization series Cullen Rutherford Miscegenation
    21: Brandon's Coworker Tara's Favorite Game (02:28:43) Aces Wild: Manic Brawling Action! NieR: Automata NieR PlatinumGames Inc.
    Final List (02:28:43) View the finalized list right here!

    This week's Insert Credit Show is brought to you by patrons like you. Thank you. Subscribe: RSS, YouTube, Apple Podcasts, Spotify, and more!


    RNZ: Morning Report
    New asteroid named after Kiwi photographer

    RNZ: Morning Report

    Play Episode Listen Later Jan 19, 2026 3:59


    An award-winning New Zealand photographer who's had an asteroid named after him for his work in space sustainability says it's a "tremendous thrill." Taranaki Whanganui reporter Robin Martin reports.

    Rusty's Garage
    The Motorsport Brief | Meet Louis Sharp

    Rusty's Garage

    Play Episode Listen Later Jan 16, 2026 17:41


    The 18-year-old Kiwi is back home for an awesome summer of racing before jetting back to Europe for a big year in F3. We spoke with him at Taupo International Motorsport Park ahead of round 2 of CTFROT - Toyota's ultra-competitive single-seater series. Louis won the opening race of the season last week as the Next Gen NZ Championship launched into four straight weeks of events. Where he gets his inner drive (let's just say he's very determined), how he's framed an at-times difficult season in 2025, and some thoughts on the year ahead with Prema. Plus his adrenaline addiction and skydiving while he's here with last week's pod guest Ryan Wood. Hope you enjoy getting to know a young racer with a great personality who would love to add his name to the prestigious list of winners of the New Zealand Grand Prix before he jets back out to Europe for the year. Head to Rusty's Facebook, Twitter or Instagram and give us your feedback and let us know who you want to hear from on Rusty's Garage. See omnystudio.com/listener for privacy information.

    Travelers In The Night
    371E-405-Kiwi Nights

    Travelers In The Night

    Play Episode Listen Later Jan 13, 2026 2:01


    New Zealand's 4.5 million people are concentrated in three major population centers, which to various degrees suffer from the modern plague of light pollution. However, most of New Zealand's large rural areas and land reserves, covering an area as large as the UK, have unpolluted natural night skies. A completely unique place to experience New Zealand's natural night sky is the Aotea [Ah-yoh-tee-ah]-Great Barrier Island International Dark Sky Sanctuary. It encompasses New Zealand's sixth-largest island, 110 square miles in area and located about 62 miles from central Auckland, easily accessible by boat or a short airline flight. The island's 1,000 residents are employed in agriculture and tourism. They value the natural night sky, function without externally generated electricity or street lights, and fully support the preservation of their pristine night sky. In the daytime Great Barrier Island offers wonderful beaches and hikes. When the sun sets, the night sky comes alive with its own natural lights. Night sky measurements by Auckland astronomer Nalayini [Na-laa-i-ni] Davies and her collaborators have shown that Great Barrier Island's natural night skies are second to none on planet Earth. Using the unaided eye, a set of binoculars, or a small telescope, an observer on Great Barrier Island is treated to spectacular views of the center of the Milky Way, the Clouds of Magellan (the nearest galaxies to us), the nearest stars, as well as numerous star clusters, meteors, comets, and other wonders of the natural night sky. Perhaps this unique spot deserves a place on your bucket list.

    Only in Seattle - Real Estate Unplugged
    Malibu's New Owners? Kiwi Billionaires Snap Up $65 million of Fire-Ravaged Lots!

    Only in Seattle - Real Estate Unplugged

    Play Episode Listen Later Jan 13, 2026 25:17


    Two billionaire brothers from New Zealand just scooped up 16 prime Malibu beachfront lots for $65 million—and their genius plan? Ship in prefab homes manufactured in China. What could possibly go wrong? While devastated fire victims navigate California's nightmarish permit process (12-24 months just for approval), these surf-bro billionaires claim they're doing it out of love for Malibu. Sure, guys. Nothing says "community rebuilding" like factory-built Chinese imports on some of America's most prestigious waterfront real estate. The city has issued only 22 permits in Malibu versus 1,300+ in Pacific Palisades, leaving desperate homeowners with few options. Enter the Mowbray brothers with their "altruistic" cash offers. Do you trust billionaires claiming they're not in it for profit? Will homes built in 4-6 weeks even meet California building codes? Should iconic American coastline be rebuilt with foreign prefabs while local contractors sit idle? Drop your thoughts below. Like and subscribe for updates on this absolute circus as it unfolds.

    RNZ: Morning Report
    Kiwi teen aims to be youngest to bike Aotearoa trails solo

    RNZ: Morning Report

    Play Episode Listen Later Jan 13, 2026 3:01


    Mahe Braaksma has spent his summer holidays riding more than 3000km across the country on his own, hoping to be the youngest person to complete the Tour Aotearoa bike trail solo. He spoke to Melissa Chan-Green.

    Crime Writers On...True Crime Review
    Deep Cover Presents: Snowball

    Crime Writers On...True Crime Review

    Play Episode Listen Later Jan 12, 2026 47:53


    After New Zealander Greg Wards married an American, she convinced him to open a cafe in a resort town. He'd learn that Lezlie Manukian forged bank documents, stole money, and made off with his parents' life savings. Years later, Kiwi journalist Ollie Wards examined his family's efforts to locate Lezlie. Wards picked up the search and discovered a trail of more fraud, cover stories, and victims. “Snowball” is from the Unravel Podcast team at the Australian Broadcasting Corporation, and is being redistributed in the feed for Pushkin's “Deep Cover” series. Part family profile, part shoe-leather investigation, “Snowball” follows Wards' attempt to reconstruct how his family was brought to financial ruin and what happened to the woman who caused it all.

    OUR SPOILER-FREE REVIEWS OF "SNOWBALL" BEGIN IN THE FINAL 12 MINUTES OF THE EPISODE.

    In Crime of the Week: We can work it out.

    For exclusive podcasts and more, sign up at Patreon. Sign up for our newsletter at crimewriterson.com. This show was recorded in The Caitlin Rogers Project Studio. Click to find out more. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Mind the Track
    Spinning Yarns with Sugar Bowl CEO Bridget Legnavsky | E78

    Mind the Track

    Play Episode Listen Later Jan 12, 2026 120:18


    In today's world of mega passes like Epic and Ikon, small independent ski resorts are struggling to survive. But one of the oldest ski resorts in America – in one of the snowiest places on Earth – is thriving. Founded in 1939 by Hannes Schroll and funded by Walt Disney, Sugar Bowl Resort on Donner Summit has welcomed both families and hardcore skiers for generations, offering a friendly, laid-back vibe and expert terrain. In Episode 78, we spin yarns with Sugar Bowl CEO Bridget Legnavsky – a CEO who absolutely shreds on skis – discussing a recent $100 million investment helping “The Bowl” stay competitive against Epic and Ikon resorts while honoring its blend of European and American traditions. We also chat about why Bridget thinks Sugar Bowl is one of the most unique resorts in the world, the differences between her home country of New Zealand and America, the future sustainability of skiing, why Lake Tahoe isn't more of an international ski destination, and if Summit Chair will spin more than 3 days this year.

    2:30 – Recording from Sugar Bowl Resort.
    4:30 – Last episode, zero snow. This episode, 10+ feet of snow. Instant winter.
    7:00 – Dangerous snow conditions – lots of avalanches. Inspecting a slide at Latopie Lake near Sonora Pass with Bridgeport Avalanche Center.
    11:30 – Fatal snowmobile-triggered avalanche on Castle Peak. Reel the program back.
    14:25 – Ski Patrol fatality at Mammoth Mountain during the post-Christmas storm.
    15:20 – Telluride ski patrol went on strike and are still on strike.
    17:30 – Interviewing Bridget Legnavsky, CEO of Sugar Bowl Resort.
    19:50 – If you're a snowmobiler – get educated. Understand the risks of the backcountry.
    21:40 – New amenities at Sugar Bowl – new deck, locker room, restaurant and Ratskeller area.
    25:13 – What are some of Bridget's favorite Kiwi slang words?
    26:05 – What words do New Zealanders use to describe snow conditions?
    27:30 – Sugar Bowl community is heavily into performing music.
    29:20 – Bridget's first winter was the winter of 2022-23, one of the biggest in Sierra history.
    30:45 – Working as a ski instructor in Japan, Europe, Breckenridge and Aspen, Colorado.
    33:50 – What makes Sugar Bowl unique in the ski industry?
    35:30 – How did Bridget find Sugar Bowl?
    37:30 – The unique structure of Sugar Bowl – owned by homeowners.
    43:30 – Replacing the village gondola – a $50 million project.
    48:30 – Misconceptions about mountain operations and ski patrol.
    54:04 – The rising operational cost of running a ski resort.
    58:20 – Are the Vail Epic Pass and Alterra Ikon Pass the biggest threats to the future of Sugar Bowl?
    1:05:45 – What's the difference between Kiwis and Aussies?
    1:07:45 – How has the family adapted to American life?
    1:09:00 – Are there things Americans can take from New Zealand culture and vice versa?
    1:12:30 – Are you a shoveler or a packer?
    1:14:30 – What is the vision for the future of Sugar Bowl?
    1:19:00 – Listener questions: What's up with Summit chair lift?
    1:21:20 – What is Sugar Bowl doing to keep skiing affordable for families?
    1:23:30 – Paying more for a season pass but getting a lesser experience.
    1:25:40 – Why is Lake Tahoe not an international destination ski market?
    1:32:00 – Ease of access to skiing in the Tahoe region is second to none.
    1:35:15 – Where do you see Sugar Bowl in 20 years?
    1:39:40 – Dope or Derp Sugar Bowl report card.
    1:48:30 – Why has the season pass purchase turned into a March thing instead of fall?
    1:50:40 – Does Sugar Bowl have plans to expand mountain bike trails in the summer?
    1:54:00 – What does Mind the Track mean to you?

    Marvins world
    Spending 30 Years in the Australian Comedy Circuit

    Marvins world

    Play Episode Listen Later Jan 11, 2026 72:03


    An interview podcast giving the inside scoop on what happens in comedy scenes across the globe, dedicated to speaking to the mavericks of the comedy world. Joining us today is Darren Sanders, a veteran of the Australian comedy circuit with over 30 years of stage and screen experience. Known for his sharp observational wit and relaxed storytelling, Darren has hosted five seasons of his own late-night program, The Darren Sanders Show, and appeared in hits like Underbelly and A Place To Call Home.

    Here is an overview of what we discussed:
    [02:07] Coming from Adelaide and gigging for 33 years
    [05:06] Australia has been influenced by a wide range of comedy styles, some of my comedy influences, my idea of building a set
    [13:52] My most memorable bomb
    [17:48] How audiences in Australia differ from each other, my experiences of being on cruises
    [22:00] My experiences of cruises
    [29:38] Why I don't do comedy festivals
    [33:21] How my skin has become like a rhino's over the years, running a comedy club and dodgy business practices
    [43:22] Why I don't bother with the circuit and focus on corporate gigs and functions
    [51:25] Is Russell Crowe a Kiwi or is he an Aussie?
    [01:03:37] What you need to prepare for when running a gig

    If you would like to know more about Darren Sanders, you can visit his Linktree at https://linktr.ee/darrensanders. You can follow this podcast on YouTube at https://bit.ly/41LWDAq, Spotify at https://spoti.fi/3oLrmyU, Apple Podcasts at https://apple.co/3LEkr3E, and you can support the pod at https://www.patreon.com/thecomediansparadise. #standupcomedypodcast #comedypodcast #interviewingcomedians #standupcomedian #australiancomedy

    RNZ: Morning Report
    Iranians in NZ fear for families amid Iran protest crackdown

    RNZ: Morning Report

    Play Episode Listen Later Jan 11, 2026 5:49


    Kiwi-based Iranians are watching with worry as Iranian military forces continue their violent crackdown on anti-government protesters. Forough Amin, who has been in New Zealand for 11 years, says she hasn't heard from her family in Iran since the phones and internet were switched off there. She spoke to Melissa Chan-Green.

    RNZ: Morning Report
    Cost-of-living crises through history

    RNZ: Morning Report

    Play Episode Listen Later Jan 11, 2026 3:21


    The phrase "cost of living" has become synonymous with a struggle faced by increasing numbers of Kiwi families just to make ends meet. The news is full of stories about the price of butter, pain at the pump, and pay parity - but it's not a new concern. Our reporter Kate Green takes a dive into the history of tough times.

    RNZ: Morning Report
    Morning Report Essentials for Monday 12 January

    RNZ: Morning Report

    Play Episode Listen Later Jan 11, 2026 26:57


    While parts of the country experienced scorching temperatures yesterday, other areas were hit with heavy rain, thunder and lightning. Napier almost broke a near 50-year record, reaching 36 degrees; New details in the Manage My Health data breach show more than 80,000 of the 125,000 patients affected by the hack are based in Northland; Kiwi-based Iranians are watching with worry as Iranian military forces continue their violent crackdown on anti-government protesters. Forough Amin, who has been in New Zealand for 11 years, says she hasn't heard from her family in Iran since the phones and internet were switched off there; The owner of a new supermarket in Christchurch says sales in the first three days were more than double what they expected. Ethan Vickery and his father Shane opened Kai Co to give shoppers an alternative to the Woolworths and Foodstuffs supermarket duopoly; Scrabble boards were put to serious use as New Zealand hosted its first national women's Scrabble championship in Auckland. Twenty competitors battled for the title, with Joanne Craig finishing third after losing her final match to the eventual champion.

    Rusty's Garage
    The Motorsport Brief | Ryan Wood in New Zealand

    Rusty's Garage

    Play Episode Listen Later Jan 9, 2026 13:48


    For our first ep of season ‘26, Rusty catches up with Supercars rising star Ryan Wood in the paddock at Hampton Downs. ‘Woody’ is stepping into an open wheeler over summer, and we caught up with him on the eve of round 1 of the Toyota series that’s now badged CTFROT. There is a seriously impressive line-up here trying to follow in the footsteps of Arvid Lindblad, who won the title last year on the way to a full-time drive in Formula 1. (You can find Arvid’s shortcast ep in our library.) The new F1 champ Lando Norris is another graduate of this series. We also get into how Ryan is coping with the different demands of open wheel racing, and how the MTEC crew wisely includes some of his colleagues from the Walkinshaw Supercars squad. Plus some thoughts on the new Supra he’ll race in the Supercars Championship this year, and the ways Kiwi legend Greg Murphy helps him as he continues to climb the ladder. Head to Rusty's Facebook, Twitter or Instagram to give us your feedback and let us know who you want to hear from on Rusty's Garage. See omnystudio.com/listener for privacy information.

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
    Artificial Analysis: Independent LLM Evals as a Service — with George Cameron and Micah Hill-Smith

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    Play Episode Listen Later Jan 8, 2026 78:24


    Happy New Year! You may have noticed that in 2025 we moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

    We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They then became one of the few AI Grant companies to raise a full seed round from Nat Friedman and Daniel Gross, and have now become the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

    We have chatted with both Clementine Fourrier of HuggingFace's OpenLLM Leaderboard and (the freshly valued at $1.7B) Anastasios Angelopoulos of LMArena on their approaches to LLM evals and trendspotting, but Artificial Analysis has staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

    George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is "open" really?

    We discuss:

    * The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
    * Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
    * The mystery shopper policy: they register accounts not on their own domain and run intelligence and performance benchmarks incognito to prevent labs from serving different models on private endpoints
    * How they make money: an enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
    * The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
    * The Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest
    * GDP Val AA: their version of OpenAI's GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
    * The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
    * The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
    * Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
    * Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
    * V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (HumanEval-style coding is now trivial for small models)

    Links to Artificial Analysis:

    * Website: https://artificialanalysis.ai
    * George Cameron on X: https://x.com/georgecameron
    * Micah Hill-Smith on X: https://x.com/micahhsmith

    Full Episode on YouTube

    Timestamps:

    * 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
    * 01:19 Business Model: Independence and Revenue Streams
    * 04:33 Origin Story: From Legal AI to Benchmarking Need
    * 16:22 AI Grant and Moving to San Francisco
    * 19:21 Intelligence Index Evolution: From V1 to V3
    * 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
    * 13:52 Mystery Shopper Policy and Maintaining Independence
    * 28:01 New Benchmarks: Omissions Index for Hallucination Detection
    * 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
    * 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks
    * 50:19 Stirrup Agent Harness: Open Source Agentic Framework
    * 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
    * 58:25 The Smiling Curve: Cost Falling While Spend Rising
    * 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
    * 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
    * 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
    * 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
    * 1:16:50 Closing: The Insatiable Demand for Intelligence

    Transcript:

    Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time artificial analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.
    swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing fireworks, and how do you have a model benchmarking thing without fireworks? But you had together, you had perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...
    George [00:01:09]: Yeah, but you can't pay us for better results.
    swyx [00:01:12]: Yes, exactly.
    George [00:01:13]: Very important.
    Micah [00:01:14]: Start off with a spicy take.
    swyx [00:01:18]: Okay, how do I pay you?
    Micah [00:01:20]: Let's get right into that.
    swyx [00:01:21]: How do you make money?
    Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial analysis is going to be two years old in January 2026. Which is pretty soon now. We first run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups.
    So we want to be... We want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But turns out a bunch of our stuff can be pretty useful to companies building AI stuff.
    swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?
    George [00:02:53]: So we have a benchmarking and insight subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself is an example kind of decision that big enterprises face, and it's hard to reason through, like this AI stuff is really new to everybody. And so we try and help, with our reports and insight subscription, companies navigate that. We also do custom private benchmarking. And so that's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks, run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. Yeah. So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.
    swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.
    Micah [00:04:19]: George was in SF, but he's Australian, but he moved here already. Yeah.
    swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting artificial analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll go to the private benchmark. Yeah.
    George [00:04:33]: Why don't we even go back a little bit to like why we, you know, thought that it was needed? Yeah.
    Micah [00:04:40]: The story kind of begins like in 2022, 2023, like both George and I have been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. So it actually worked pretty well for its era, I would say. Yeah. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. So had like this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like you're trying to think about accuracy, a bunch of other metrics and performance and cost. And mostly just no one was doing anything to independently evaluate all the models. And certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.
    swyx [00:05:49]: Like we didn't like get together and say like, Hey, like we're going to stop working on all this stuff. I'm like, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.
    Micah [00:05:58]: That's actually true. I don't even think we'd paused, like, George had an acquittance job. I didn't quit working on my legal AI thing. Like it was genuinely a side project.
    George [00:06:05]: We built it because we needed it as people building in the space and thought, oh, other people might find it useful too. So we'll buy a domain and link it to the Vercel deployment that we had and tweet about it. And, but very quickly it started getting attention. Thank you, Swyx, for, I think, doing an initial retweet and spotlighting it there, this project that we released. And then very quickly though, it was useful to others, but very quickly it became more useful as the number of models released accelerated. We had Mixtral 8x7B and it was a key... That's a fun one. Yeah. Like an open source model that really changed the landscape and opened up people's eyes to other serverless inference providers and thinking about speed, thinking about cost. And so that was a key. And so it became more useful quite quickly. Yeah.
    swyx [00:07:02]: What I love talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data and that's obviously more relevant. But I want to stay on the origin story a little bit more. When you started out, I would say, I think the status quo at the time was every paper would come out and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some knowledge. I think there's some version of an Excel sheet or a Google sheet where you just like copy and paste the numbers from every paper and just post it up there. And then sometimes they don't line up because they're independently run. And so your numbers are going to look better than... Your reproductions of other people's numbers are going to look worse because you don't hold their models correctly or whatever the excuse is. I think then Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. The way that if I were to start artificial analysis at the same time you guys started, I would have used EleutherAI's eval harness. Yup.
    Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, it's like if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website. Yeah. Yeah. Like one of the reasons why we realized that we had to run the evals ourselves and couldn't just take results from the labs was just that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get... You can put the answer into the model. Yeah. That in the extreme. And like you get crazy cases like back when Google released Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and, like, constructed, I think, never-published chain of thought examples, 32 of them, in every topic in MMLU to run it, to get the score. Like there are so many things that you... They never shipped Ultra, right? That's the one that never made it out. Not widely. Yeah. Yeah. Yeah. I mean, I'm sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.
    swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren't at the start.
    Micah [00:09:36]: So like, I mean, we're paying for it personally at the start. There's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of like hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was like kind of fine. Yeah. Yeah. These days that's gone up an enormous amount for a bunch of reasons that we can talk about. But yeah, it wasn't that bad because you can also remember that like the number of models we were dealing with was hardly any and the complexity of the stuff that we wanted to do to evaluate them was a lot less. Like we were just asking some Q&A type questions and then one specific thing was for a lot of evals initially, we were just like sampling an answer. You know, like, what's the answer for this? Like, we'd go to the answer directly without letting the models think. We weren't even doing chain of thought stuff initially. And that was the most useful way to get some results initially. Yeah.
    swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right? Like because sometimes the models... the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned the wrong format and they will get a zero for that unless you work it into your parser. And that involves more work. And so, I mean, but there's an open question whether you should give it points for not following your instructions on the format.
    Micah [00:11:00]: It depends what you're looking at, right? Because you can, if you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's answered. But these days, it's mostly less of a problem. Like, if you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.
    swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple choice question, sometimes there's a bias towards the first answer, so you have to randomize the responses. All these nuances, like, once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.
    Micah [00:11:47]: You've also got, like… You've got, like, the different degrees of variance in different benchmarks, right? Yeah. So, if you run four-option multi-choice on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see on a four-option multi-choice eval is pretty enormous if you only do a single run of it, and it has a small number of questions, especially. So, like, one of the things that we do is run an enormous number of all of our evals when we're developing new ones and doing upgrades to our intelligence index to bring in new things.
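    The two harness problems discussed here — extracting an answer from free-form output and randomizing option order to counter position bias — can be sketched in a few lines. This is an illustrative fragment, not Artificial Analysis code; `shuffle_choices` and `extract_letter` are hypothetical helper names, and a real harness would call a model API where the hard-coded response string appears.

```python
import random
import re

def shuffle_choices(question, choices, rng):
    """Randomize option order to counter position bias; return prompt and answer key."""
    order = list(range(len(choices)))
    rng.shuffle(order)
    letters = "ABCD"
    lines = [f"{letters[i]}. {choices[j]}" for i, j in enumerate(order)]
    prompt = question + "\n" + "\n".join(lines) + "\nAnswer with a single letter."
    key = {letters[i]: j for i, j in enumerate(order)}  # letter -> original choice index
    return prompt, key

def extract_letter(response):
    """Pull the final standalone A-D letter out of a free-form model response."""
    matches = re.findall(r"\b([A-D])\b", response.strip())
    return matches[-1] if matches else None

# Usage: score a (stand-in) response against the original correct index.
rng = random.Random(0)
prompt, key = shuffle_choices("2 + 2 = ?", ["3", "4", "5", "22"], rng)
letter = extract_letter("The answer is B.")
is_correct = key.get(letter) == 1  # index 1 is "4" in the original choice list
```

    As the conversation notes, an LLM-as-answer-extractor can replace the regex when responses are too free-form for pattern matching.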
    Yeah. So that we can dial in the right number of repeats so that we can get to the 95% confidence intervals that we're comfortable with, so that when we pull that together, we can be confident in intelligence index to at least as tight as, like, a plus or minus one at a 95% confidence. Yeah.
    swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.
    George [00:12:37]: So, that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that's assuming one repeat in terms of how we report it because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there because of the repeats.
    swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking, you don't have any special deals with the labs. They don't discount it. You just pay out of pocket or out of your sort of customer funds. Oh, there is a mix. So, the issue is that sometimes they may give you a special endpoint, which is… Ah, 100%.
    Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, like, on everything we do on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true, like the one you bring up right here: the fact that if we're working with a lab, and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy.
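    The repeated-runs idea described here can be made concrete: run the eval several times, then report a mean with a confidence interval, and pick a repeat count that keeps the interval tight. This is a normal-approximation sketch under assumed numbers, not the exact methodology Artificial Analysis uses.

```python
import math

def mean_and_ci95(scores):
    """Mean accuracy across repeated eval runs with a normal-approximation 95% CI half-width."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half_width = 1.96 * math.sqrt(var / n)  # standard error times z(0.975)
    return mean, half_width

def runs_needed(stdev, target_half_width):
    """Repeats required to shrink the 95% CI half-width below a target."""
    return math.ceil((1.96 * stdev / target_half_width) ** 2)

# Four hypothetical repeated runs of a noisy benchmark (accuracy in percent):
mean, hw = mean_and_ci95([71.0, 69.5, 72.5, 70.0])
```

    With a per-run standard deviation of 2 points, for example, `runs_needed(2.0, 1.0)` says it takes 16 repeats to get the half-width down to plus or minus one — which is why repeats multiply eval cost so quickly.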
    And so, and we're totally transparent with all the labs we work with about this, that we will register accounts not on our own domain and run both intelligence evals and performance benchmarks… Yeah, that's the job. …without them being able to identify it. And no one's ever had a problem with that. Because, like, a thing that turns out to actually be quite a good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.
    swyx [00:14:23]: That's true. I never thought about that. I've been in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.
    Micah [00:14:36]: I mean, okay, the biggest one, like, that I'll bring up, like, is more of a conceptual one, actually, than, like, direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. So that doesn't mean anything that we should really call shenanigans. Like, I'm not talking about training on test set. But if you know that you're going to be graded on a particular thing, if you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing that preferably are going to be helpful for a wide range of how actual users want to use the thing that you're building. But will not necessarily work. Will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to, like, how we might use modern coding agents and stuff. But it's clearly not one for one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without there being a reflection of overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.
    swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier. You've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join and move here. What was it like? I think you were in, like, batch two? Batch four. Batch four. Okay.
    Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies and were extremely aligned with the mission of what we were trying to do. Like, we're not quite typical of, like, a lot of the other AI startups that they've invested in.
    swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they say any advice that really affected you in some way or, like, were one of the events very impactful? That's an interesting question.
    Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.
    swyx [00:17:09]: Which is also, like, a crazy list. Yeah.
    George [00:17:11]: Oh, totally. Yeah, yeah, yeah. There was something about, you know, speaking to Nat and Daniel about the challenges of working through a startup and just working through the questions that don't have, like, clear answers and how to work through those kind of methodically and just, like, work through the hard decisions. And they've been great mentors to us as we've built artificial analysis. Another benefit for us was that other companies in the batch and other companies in AI Grant are pushing the capabilities. Yeah. And I think that's a big part of what AI can do at this time. And so being in contact with them, making sure that artificial analysis is useful to them, has been fantastic for supporting us in working out how we should build out artificial analysis to continue being useful to those, like, you know, building on AI.
    swyx [00:17:59]: I think to some extent, I'm of mixed opinion on that one, because to some extent, your target audience is not people in AI Grant, who are obviously at the frontier. Yeah. Do you disagree?
    Micah [00:18:09]: To some extent. To some extent. But then, so a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypical power users of artificial analysis. Some of the people with the strongest opinions about what we're doing well and what we're not doing well and what they want to see next from us. Yeah. Yeah. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently for different models and different parts of your application to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're like not commercial customers of ours, like we don't charge for all our data on the website. Yeah. They are absolutely some of our power users.
    swyx [00:19:07]: So let's talk about just the evals as well. So you start out from the general like MMLU and GPQA stuff. What's next? How do you sort of build up to the overall index? What was in V1 and how did you evolve it? Okay.
    Micah [00:19:22]: So first, just like background, like we're talking about the artificial analysis intelligence index, which is our synthesis metric that we pulled together currently from 10 different eval data sets to give what we're pretty confident is the best single number to look at for how smart the models are. Obviously, it doesn't tell the whole story. That's why we published the whole website of all the charts to dive into every part of it and look at the trade-offs. But best single number. So right now, it's got a bunch of Q&A type data sets that have been very important to the industry, like a couple that you just mentioned. It's also got a couple of agentic data sets. It's got our own long context reasoning data set and some other use case focused stuff. As time goes on, the things that we're most interested in, that are going to be important to the capabilities that are becoming more important for AI, what developers are caring about, are going to be first around agentic capabilities. So surprise, surprise. We're all loving our coding agents, and how the model is going to perform like that and then do similar things for different types of work are really important to us. The linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.
    swyx [00:20:46]: But I guess one thing I was driving at was like the V1 versus the V2 and how it changed over time.
    Micah [00:20:53]: Like how we've changed the index to where we are.
    swyx [00:20:55]: And I think that reflects on the change in the industry. Right. So that's a nice way to tell that story.
    Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, I think, how much progress has been made in the last two years. Like we obviously play the game constantly of like today's version versus last week's version and the week before, and all of the small changes in the horse race between the current frontier and who has the best like smaller-than-10B model right now this week. Right. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out, a couple of years ago, literally most of what we were doing to evaluate the models then would all be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence. We can talk about that more in a bit. So V1, V2, V3, we made things harder. We covered a wider range of use cases. And we tried to get closer to things developers care about as opposed to like just the Q&A type stuff that MMLU and GPQA represented. Yeah.
    swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark and like looking around and asking questions about it. Yeah.
    Micah [00:22:21]: Let's do it. Okay.
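    The synthesis metric Micah describes — many eval datasets rolled into one number — amounts to a weighted average of per-dataset scores. A minimal sketch, with made-up dataset names and weights (the actual composition and weighting of the Intelligence Index are not specified here):

```python
def intelligence_index(scores, weights):
    """Combine per-dataset accuracies (0-100) into a single weighted index.

    `scores` and `weights` map dataset name -> value; weights need not sum to 1.
    Illustrative only, not Artificial Analysis's actual weighting.
    """
    total_w = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_w

# Hypothetical per-dataset results for one model, weighted toward agentic tasks:
scores = {"qa_suite": 82.0, "gpqa_like": 71.0, "agentic_suite": 55.0}
weights = {"qa_suite": 1.0, "gpqa_like": 1.0, "agentic_suite": 2.0}
idx = intelligence_index(scores, weights)
```

    Re-weighting (or swapping datasets in and out, as between V1 and V3) changes the index without any model changing, which is why version bumps of such an index have to be read as methodology changes, not capability jumps.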
This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.George [00:22:26]: And I think a little bit about the direction that we want to take it. And we want to push benchmarks. Currently, the intelligence index and evals focus a lot on kind of raw intelligence. But we kind of want to diversify how we think about intelligence. And we can talk about it. But kind of new evals that we've kind of built and partnered on focus on topics like hallucination. And we've got a lot of topics that I think are not covered by the current eval set that should be. And so we want to bring that forth. But before we get into that.swyx [00:23:01]: And so for listeners, just as a timestamp, right now, number one is Gemini 3 Pro High. Then followed by Cloud Opus at 70. Just 5.1 high. You don't have 5.2 yet. And Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.George [00:23:25]: Totally. A quick view of that is, okay, there's a lot. I love it. I love this chart. Yeah.Micah [00:23:30]: This is such a favorite, right? Yeah. And almost every talk that George or I give at conferences and stuff, we always put this one up first to just talk about situating where we are in this moment in history. This, I think, is the visual version of what I was saying before about the zooming out and remembering how much progress there's been. If we go back to just over a year ago, before 01, before Cloud Sonnet 3.5, we didn't have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, open AI was untouchable for well over a year. 
And, I mean, you would remember that time period well: there were very open questions about whether AI was going to be competitive, full stop, whether OpenAI would just run away with it, whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.

George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, like how crazy it's been.

swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.

George [00:25:01]: It's the models that we're highlighting by default in our charts, in our intelligence index. Okay.

swyx [00:25:07]: You just have a manually curated list of stuff.

George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose which models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.

swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.

George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.

Micah [00:25:44]: Yeah, yeah, yeah. I agree. Yeah, well, a couple of weeks off. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones and stuff.
I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas, running the evals and getting back result by result on DeepSeek V3. This was the first of their V3 architecture, the 671B MoE.

Micah [00:26:19]: And we were very, very impressed. That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of V3 and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with just an extremely strong base model, completely open weights, which we had as the best open weights model. So, yeah, that's the thing that you really see in the chart. But I think DeepSeek really got us on Boxing Day last year.

George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.

George [00:26:54]: I'm from Singapore.

swyx [00:26:55]: A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAII. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.

Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.

George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a system level. And so then we changed our throughput metric, which we now call output speed, because throughput makes sense at a system level, so we took that name.

swyx [00:27:32]: Take me through more charts. What should people know? Obviously, the way you look at the site is probably different than how a beginner might look at it.

Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into.
Maybe we can skip past all the... like, we have lots and lots of evals and stuff. The interesting ones to talk about today are a few of our recent things that probably not many people will be familiar with yet. So the first one of those is our Omniscience Index. This one is a little bit different from most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models, and to test hallucination by looking at, when the model doesn't know the answer, so it's not able to get it correct, what's its probability of saying "I don't know" versus giving an incorrect answer. So the metric that we use for Omniscience goes from negative 100 to positive 100, because we simply take off a point if you give an incorrect answer to a question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say "I don't know" instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models, and the labs creating them, to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything; there's no incentive to say "I don't know." So we did that for this one here.

swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah.

George [00:29:31]: On that, one reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are.

swyx [00:29:43]: I don't know. Maybe it might be, though. You put it in like a JSON field, say confidence, and maybe it spits out something. Yeah.
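The scoring rule described above can be sketched in a few lines. This is my own illustration of the idea (function names and the exact aggregation are assumptions, not Artificial Analysis's implementation): correct answers earn a point, wrong answers lose a point, and abstaining ("I don't know") is neutral.

```python
def omniscience_score(correct: int, incorrect: int, abstained: int) -> float:
    """+1 per correct answer, -1 per incorrect answer, 0 for "I don't know",
    scaled to the -100..+100 range described in the episode."""
    total = correct + incorrect + abstained
    if total == 0:
        raise ValueError("no questions graded")
    return 100 * (correct - incorrect) / total

def hallucination_rate(incorrect: int, abstained: int) -> float:
    """Of the questions the model could not answer correctly, the share
    where it guessed wrong instead of saying "I don't know"."""
    missed = incorrect + abstained
    return incorrect / missed if missed else 0.0

# A model that answers 60/100 correctly, guesses wrong on 10, and
# abstains on 30 scores 50.0, with a 25% hallucination rate.
```

Under this rule, guessing on questions you would get wrong strictly lowers the score, which is the incentive shift George describes.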
You know, we have done a few evals podcasts over the years, and when we did one with Clémentine of Hugging Face, who maintained the open-source leaderboard, this was one of her top requests: some kind of hallucination slash confidence calibration thing. And so, hey, this is one of them.

Micah [00:30:05]: And, like anything that we do, it's not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. One of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. We have published a public test set, but we've only published 10% of it. The reason is that, for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we kept most of it held out so that we can keep it reliable for a long time. It lets us do a bunch of really cool things, including breaking down quite granularly by topic. We've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.

swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would that be the other way around in a normal capability environment? I don't know.
What do you make of that?

George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability, when they don't know something, to say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over Gemini 2.5 Flash and 2.5 Pro. And if I add Pro quickly here...

swyx [00:32:07]: I bet Pro's really good. Uh, actually no, I meant the GPT Pros.

George [00:32:12]: Oh yeah.

swyx [00:32:13]: Because the GPT Pros are rumored, we don't know for a fact, to be like eight runs and then an LLM judge on top. Yeah.

George [00:32:20]: So we saw a big jump in, this is accuracy, so this is just the percent that they get correct, and Gemini 3 Pro knew a lot more than the other models. So a big jump in accuracy, but relatively no change between the Google Gemini models between releases in the hallucination rate. Exactly. And so it's likely due to just a different post-training recipe with the Claude models. Yeah.

Micah [00:32:45]: That's what's driven this. Yeah. You can partially blame us, and how we define intelligence, having until now not defined hallucination as a negative in the way that we think about intelligence. And so that's what we're changing.

swyx [00:32:56]: I know many smart people who are confidently incorrect.

George [00:33:02]: Look at that. That is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas.
One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay.

swyx [00:33:32]: And is it sort of like a HumanEval type or something different? Or like a FrontierMath type?

George [00:33:37]: It's not dissimilar to FrontierMath. These are research questions that academics in the physics world would be able to answer, but models really struggle to answer. So the top score here is only 9%.

swyx [00:33:51]: And the people that created this, like Minway, and actually Ofir, who was kind of behind SWE-bench. What organization is this? Oh, is this... it's Princeton.

George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore new ideas in physics with the model as a thought partner, just because they want the models to hallucinate. Sometimes it's something new. Yeah, exactly.

swyx [00:34:21]: So not right in every situation, but I think it makes sense to test hallucination in scenarios where it makes sense. Also, the obvious question: this is one of many. Every lab has a system card that shows some kind of hallucination number, and you've chosen not to endorse that, and you've made your own. And I think that's a choice. In some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun; you provide this as a service here. You have to fight the, well, who are we to do this?
And your answer is that we have a lot of customers, and, you know... but, like, I guess, how do you convince the individual?

Micah [00:35:08]: I mean, I think for hallucinations specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. We've called this the Omniscience hallucination rate, not trying to declare that it's, like, humanity's last hallucination. You could have some interesting naming conventions and all this stuff. The bigger-picture answer to that is something that I actually wanted to mention just as George was explaining Critical Point: as we go forward, we are building evals internally, we're partnering with academia, and we're partnering with AI companies to build great evals. We have pretty strong views, in various ways for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not necessarily obsessed with the idea that everything we do has to be built entirely within our own team. Critical Point is a cool example, where we were a launch partner for it, working with academia. We've got some partnerships coming up with a couple of leading companies. Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great datasets in the past that we've used to great success independently. And so between all of those techniques, we're going to be releasing more stuff in the future. Cool.

swyx [00:36:26]: Let's cover the last couple. And then I want to talk about your trends analysis stuff, you know? Totally.

Micah [00:36:31]: So actually, I have one little factoid on Omniscience.
If you go back up to accuracy on Omniscience, an interesting thing about this accuracy metric is that it tracks, more closely than anything else that we measure, the total parameter count of models. That makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric. We're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts do they recall? And yeah, it tracks parameter count extremely closely. Okay.

swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. Rumors. I hear all sorts of numbers. I don't know what to trust.

Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, we've got all the open weights models, and you can squint and see that likely the leading frontier models right now are quite a lot bigger than the roughly one trillion parameters that the open weights models we're looking at here cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, but that's not out yet. Take those together, have a look, and you might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah.

swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it.
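The "draw the line and squint" exercise Micah describes is a log-linear fit of accuracy against total parameters, then inverting it. A minimal sketch, with made-up illustrative (parameters, accuracy) pairs rather than real Omniscience results:

```python
import math

# Illustrative only: hypothetical (total params in billions, accuracy %)
# pairs for open-weights models. NOT real Omniscience data.
points = [(32, 18.0), (120, 26.0), (405, 33.0), (671, 36.0), (1000, 39.0)]

# Least-squares fit of accuracy against log(total parameters).
xs = [math.log(p) for p, _ in points]
ys = [a for _, a in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def implied_params(accuracy: float) -> float:
    """Invert the fit: what total parameter count would this accuracy imply?"""
    return math.exp((accuracy - intercept) / slope)
```

With a fit like this, a frontier model scoring well above the open-weights trend line implies a parameter count well above one trillion, which is the shape of the argument in the episode, not a measurement.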
Like, yeah, totally.

George [00:38:17]: They've also got different incentives at play. Compared to open weights models, which are thinking about supporting others in self-deployment, for the labs doing inference at scale it's, I think, less about total parameters in many cases when thinking about inference costs, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.

Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously if you're a developer or company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence, at the cost to run the index number, and at the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's all that matters.

swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while. Yeah.

Micah [00:39:07]: But that is, on its own, actually a very interesting one, right? Chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total model size, especially with the upcoming hardware generations. Yes.

swyx [00:39:29]: So, you know, taking off my shitposting face for a minute. At the same time, I do feel like, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out, and therefore we need to start exploring at least a different path. GDPVal, I think, is only about a month or so old. I was also very positive when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool.
And you have your own version.

George [00:39:59]: It's a fantastic dataset. Yeah.

swyx [00:40:01]: And maybe recap for people who are still out of it: it's 44 tasks, based on some kind of GDP cutoff, meant to represent broad white-collar work that is not just coding. Yeah.

Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and input files for a lot of them. Within the 44, it's divided into 220-something subtasks, which are the level that we run through the agent. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work. No eval is perfect; there are always going to be more things to look at. Largely, that's because, in order to define the tasks well enough that you can run them, they need to only have a handful of input files and very specific instructions. So I think the easiest way to think about them is that they're like quite hard take-home exam tasks that you might do in an interview process.

swyx [00:40:56]: Yeah, for listeners, it is no longer like a long prompt. It's like, well, here's a zip file with a spreadsheet or a PowerPoint deck or a PDF; go nuts and answer this question.

George [00:41:06]: OpenAI released a great dataset, and they released a good paper which looks at performance across the different web chatbots on the dataset. It's a great paper; I encourage people to read it. What we've done is take that dataset and turn it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the dataset, and then we developed an evaluator approach to compare outputs. It's AI-enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned with human preferences.
One data point there is that even with Gemini 3 Pro as the evaluator, Gemini 3 Pro interestingly doesn't actually do that well. So that's kind of a good example of what we've done in GDPVal AA.

swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM-as-judge is self-preference, that models usually prefer their own output, and in this case, it was not that. Totally.

Micah [00:42:08]: I think the places where it makes sense to use an LLM-as-judge approach now are quite different from some of the early LLM-as-judge stuff a couple of years ago. Some of that, and MT-Bench was a great project that was a good example of this a while ago, was about judging conversations and a lot of style-type stuff. Here, the task that the grading model is doing is quite different from the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter and web search, the file system, to go through many, many turns to try to create the documents. Then on the other side, when we're grading, we're running it through a pipeline to extract visual and text versions of the files so we can provide them to Gemini, and we're providing the criteria for the task and getting it to pick which of two potential outputs more effectively meets the criteria. It turns out that it's just very, very good at getting that right; it matched human preference a lot of the time. I think that's because it's got the raw intelligence, combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and that we're comparing against criteria, not just zero-shot asking the model to pick which one is better.

swyx [00:43:26]: Got it. Why is this an ELO?
And not a percentage, like GDPVal?

George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks.

swyx [00:43:43]: What task is that?

George [00:43:45]: I mean, it's in the dataset. Like be a YouTuber? It's a marketing video.

Micah [00:43:49]: Oh, wow. What? The model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do that with a code editor. I mean, the computer-use stuff doesn't work quite well enough, and so on. But yeah.

George [00:44:02]: And so there's no ground truth, necessarily, to compare against to work out percentage correct. It's hard to come up with correct or incorrect there. So it's on a relative basis, and we use an ELO approach to compare outputs from each of the models on each task.

swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give it an ELO, so you have a human in there. I think what's helpful about GDPVal, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar: if you've crossed 50, you are superhuman. Yeah.

Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. That's one of the reasons presenting it as ELO is quite helpful: it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about it as a human is quite different from how the models would go about it. Yeah.

swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there.
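Turning pairwise judge preferences into a leaderboard can be done with the standard Elo update (the same one used in chess). This is a generic sketch of that mechanism, not Artificial Analysis's actual aggregation, which may differ (e.g. a Bradley-Terry-style fit over all comparisons):

```python
def update_elo(ratings: dict, model_a: str, model_b: str,
               a_wins: float, k: float = 32.0) -> None:
    """One Elo update after the grading model compares two outputs.
    a_wins is 1.0 if A's output was preferred, 0.0 if B's was."""
    ra, rb = ratings[model_a], ratings[model_b]
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    ratings[model_a] = ra + k * (a_wins - expected_a)
    ratings[model_b] = rb + k * ((1.0 - a_wins) - (1.0 - expected_a))

ratings = {"model_a": 1000.0, "model_b": 1000.0}
update_elo(ratings, "model_a", "model_b", a_wins=1.0)
# At equal ratings, one win moves each model by k/2 = 16 points.
```

The appeal George mentions falls out of this structure: new models can be slotted in by comparing them against already-rated ones, without rerunning every historical pairing.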
Is that like just one last...

Micah [00:45:20]: Well, no, no, no, it is the best model released by Meta. And so it makes it into the homepage default set, still, for now.

George [00:45:31]: Another inclusion that's quite interesting is that we also ran it across the latest versions of the web chatbots. And so we have...

swyx [00:45:39]: Oh, that's right.

George [00:45:40]: Oh, sorry.

swyx [00:45:41]: I, yeah, I completely missed that. Okay.

George [00:45:43]: No, not at all. So that's the one with the checkered pattern. So that is their harness, not yours, is what you're saying. Exactly. And what's really interesting is that if you compare, for instance, Claude Opus 4.5 using the Claude web chatbot, it performs worse than the model in our agentic harness. In every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.

swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, the chatbot is meant for consumer use cases, and here you're pushing it for something else.

Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, you have a cost goal. We let the models work as long as they want, basically. Yeah. Do you copy-paste manually into the chatbot? Yeah, that was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as hundreds of models.

swyx [00:46:38]: Well, I don't know, talk to Browserbase. They'll automate it for you. You know, I have thought about how we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.

Micah [00:46:53]: And that's grown a huge amount over the last year, right? Like the tools.
The tools that are available have actually diverged, in my opinion, a fair bit across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.

swyx [00:47:10]: What tools and what data connections come to mind when you say that? What's interesting, what's notable work that people have done?

Micah [00:47:15]: Oh, okay. So my favorite example of this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. So for me, Google Drive, OneDrive, our Supabase databases if we need to do some analysis on some data or something. Preferably the model can be plugged into all of those things and can go do some useful work based on them. The things that I find most impressive currently, that I am somewhat surprised work really well in late 2025, are that I can have models use the Supabase MCP to query, read-only of course, and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and they can read my Gmail and my Notion. Okay, you actually use that. That's good. Is that a Claude thing? To various degrees, both ChatGPT and Claude right now. I would say that this stuff barely works right now, in fairness. Like...

George [00:48:33]: Because people are actually going to try this after they hear it.
If you get an email from Micah, odds are it wasn't written by a chatbot.

Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.

swyx [00:48:46]: And so you can feel it coming, right? And yeah, this time next year, we'll come back and see where it's going. Totally. Supabase, shout out, another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.

George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users, and we probably do some things more manually than we should; the Supabase support line is a little bit... they're super friendly. One extra point regarding GDPVal AA is that, on the basis of the overperformance of the models compared to the chatbots, we realized that, oh, the reference harness that we built actually works quite well on generalist agentic tasks. This proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else?

Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPVal we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context we had to give them a custom tool. But yeah, exactly.

George [00:50:21]: So it turned out that we created a good generalist agentic harness, and so we released that on GitHub yesterday. It's called Stirrup.
So if people want to check it out, it's a great base for building a generalist agent for more specific tasks.

Micah [00:50:39]: I'd say the best way to use it is to git clone it and then have your favorite coding agent make changes to it, to do whatever you want, because it's not that many lines of code and the coding agents can work with it super well.

swyx [00:50:51]: Well, that's nice for the community to explore, share, and hack on. I think maybe in other similar environments, the Terminal-Bench guys have done Harbor. And so it's a bundle of: we need our minimal harness, which for them is Terminus, and we also need the RL environments or Docker deployment thing to run independently. So I don't know if you've looked at Harbor at all. Is that like a standard that people want to adopt?

George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host Terminal-Bench benchmarks on Artificial Analysis. We've looked at it from a coding agent perspective, but we could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough, and they've gotten better tools, so they can perform better when just given a minimalist set of tools and let run. Let the model control the agentic workflow, rather than using another framework that's a bit more built out and tries to dictate the flow. Awesome.

swyx [00:51:56]: Let's cover the Openness Index, and then let's go into the report stuff. So that's the last of the proprietary numbers, I guess. I don't know how you classify all these. Yeah.

Micah [00:52:07]: Or, call it the last of the three new things that we're talking about from the last few weeks.
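The "minimalist harness, let the model control the workflow" idea described above can be sketched as a bare agent loop. Everything here is a placeholder illustration, not Stirrup's actual API: the model decides each turn whether to call a tool or finish, and the harness just executes and feeds results back.

```python
import json

def run_agent(task: str, tools: dict, call_model, max_turns: int = 50) -> str:
    """Minimal agent loop: the model, not the harness, drives the flow.
    `call_model` returns either {"type": "final", "content": ...} or
    {"type": "tool", "tool": name, "args": {...}} -- an assumed protocol."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_model(messages, tool_names=list(tools))
        if reply["type"] == "final":          # model decides when it is done
            return reply["content"]
        tool = tools[reply["tool"]]            # model picks the tool and args
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "max turns exceeded"
```

In a harness like the one George describes, `tools` would hold web search, web browsing, and a code execution environment, plus context management to keep `messages` within the model's window.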
Because, I mean, we do a mix of stuff. Some of it uses open source, some of it we open source ourselves, and some is proprietary stuff that we don't always open source. The long context reasoning dataset last year we did open source. And then for all of the work on performance benchmarks across the site, some of them we're looking to open source, but some of them we're constantly iterating on, and so on. So there's a huge mix of stuff that is and isn't open source across the site. That's LCR, for people. Yeah, yeah, yeah, yeah.

swyx [00:52:41]: But let's talk about openness.

Micah [00:52:42]: Let's talk about the Openness Index. This here is, call it, a new way to think about how open models are. We have for a long time tracked whether the models are open weights and what the licenses on them are. And that's pretty useful; that tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important, which we haven't tracked until now. And that's how much is disclosed about how the model was made. So transparency about data, pre-training data and post-training data, and whether you're allowed to use that data, and transparency about methodology and training code. So basically, those are the components. We bring them together to score an Openness Index for models, so that in one place you can get this full picture of how open models are.

swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this matters. I don't know what the numbers mean, apart from: is there a max number? Is this out of 20?

George [00:53:44]: It's out of 18 currently. We've got an Openness Index page, but essentially these are points: you get points for being more open across these different categories, and the maximum you can achieve is 18.
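A points-based rubric like the one George describes can be sketched as a simple checklist-and-sum. The category names and per-category weights below are my own invention for illustration; only the 18-point maximum and the general dimensions (weights, license, data transparency, methodology/training code) come from the conversation. The real rubric is on the Artificial Analysis Openness Index page.

```python
# Hypothetical point weights, chosen only so the total is 18.
CATEGORIES = {
    "weights_released": 4,
    "permissive_license": 4,
    "pretraining_data_disclosed": 4,
    "post_training_data_disclosed": 3,
    "training_code_and_methodology": 3,
}

def openness_index(model_facts: dict) -> int:
    """Sum the points for every category the model satisfies."""
    return sum(pts for cat, pts in CATEGORIES.items() if model_facts.get(cat))

fully_open = {cat: True for cat in CATEGORIES}
print(openness_index(fully_open))  # 18 with these illustrative weights
```

The useful property of this shape is that partial openness (say, weights and license but no data disclosure) lands at an intermediate score rather than a binary open/closed label.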
So AI2 with their extremely open OMO3 32B think model is the leader in a sense.swyx [00:54:04]: It's hooking face.George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to run, we need to get the intelligence benchmarks right to get it on the site.swyx [00:54:12]: You can't have it open in the next. We can not include hooking face. We love hooking face. We'll have that, we'll have that up very soon. I mean, you know, the refined web and all that stuff. It's, it's amazing. Or is it called fine web? Fine web. Fine web.Micah [00:54:23]: Yeah, yeah, no, totally. Yep. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the company's contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do intelligence index on, on the site. And it's just an extra view to understand.swyx [00:54:43]: Can you scroll down to this? The, the, the, the trade-offs chart. Yeah, yeah. That one. Yeah. This, this really matters, right? Obviously, because you can b
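The point-based scheme George describes can be sketched as a simple tally. This is an illustrative reconstruction, not Artificial Analysis's actual rubric: the category names and point values below are assumptions; only the broad components (weights/license, data transparency, data usability, methodology, training code) and the 18-point maximum come from the conversation.

```python
# Hypothetical sketch of a point-based "openness index".
# Category names and weights are assumptions chosen to sum to the
# 18-point maximum mentioned in the transcript; the real rubric differs.
OPENNESS_RUBRIC = {
    "weights_released": 3,    # model weights available for download
    "permissive_license": 3,  # license permits broad use of the weights
    "pretraining_data": 3,    # pre-training data disclosed
    "posttraining_data": 3,   # post-training data disclosed
    "data_usable": 2,         # disclosed data licensed for reuse
    "methodology": 2,         # training methodology described
    "training_code": 2,       # training code released
}
MAX_SCORE = sum(OPENNESS_RUBRIC.values())  # 18

def openness_index(disclosures: set[str]) -> int:
    """Sum the points for every rubric item a model release satisfies."""
    return sum(pts for item, pts in OPENNESS_RUBRIC.items() if item in disclosures)

fully_open = openness_index(set(OPENNESS_RUBRIC))    # 18: everything disclosed
weights_only = openness_index({"weights_released"})  # 3: open weights, nothing else
```

Under a rubric like this, a fully documented release (weights, data, methodology, code) hits the 18-point ceiling, while a weights-only drop scores near the bottom, which matches the spread the hosts describe between Ai2-style releases and typical open-weights models.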

    No Set Path: Entertainment Break-In Stories
    67 - Adapting True Stories for Movies with Austin Kolodney (Dead Man's Wire)

    No Set Path: Entertainment Break-In Stories

    Play Episode Listen Later Jan 8, 2026 94:02


Austin Kolodney is the screenwriter of “Dead Man's Wire,” based on a true story and directed by Gus Van Sant, in theaters January 2026. He was named one of Variety's 2025 Screenwriters to Watch. A director as well as a writer, Austin's work has been featured across major platforms including Funny Or Die, Syfy, Audible, Almost Friday TV, and Comedy Central. His narrative shorts, Two Chairs, Not One and Kiwi, earned prestigious Vimeo Staff Pick honors. Today we get into why Austin walked twenty miles round trip across LA to meet with Werner Herzog; working at the LA Zoo after the industry was ravaged by strikes and the pandemic; how he put himself through USC film school after two years at a community college; and the things he had in place to make reps pursue him for professional relationships that have lasted.
SEE “DEAD MAN'S WIRE”: Limited Release 1/07/26 | Out Nationwide 1/16/26
KEEP UP WITH AUSTIN: IG: @awwwwstin
KEEP UP WITH THE SHOW: All Platforms: @NoSetPathShow | www.bio.site/NoSetPath
ANDREA'S GAMBIT PREMIERE TICKETS: https://mpi.swoogo.com/andreasgambit-la-premiere/10598619

    RNZ: Morning Report
    Kiwi busking mum and daughter become TikTok hit

    RNZ: Morning Report

    Play Episode Listen Later Jan 8, 2026 5:53


A Hamilton mum and her 8-year-old daughter have found unexpected social media fame while taking a special road trip this summer. Jessie Matthews and Ally spoke to Melissa Chan-Green.

    Todd N Tyler Radio Empire
    1/6 3-2 Kiwi Hookers

    Todd N Tyler Radio Empire

    Play Episode Listen Later Jan 6, 2026 10:22


Totally legal! See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    RNZ: Morning Report
    Chatty tui Pleakley surprises visitors at Otorohanga Kiwi House

    RNZ: Morning Report

    Play Episode Listen Later Jan 6, 2026 3:50


A cheeky tui named Pleakley is delighting visitors at the Otorohanga Kiwi House by calling out and laughing at passersby. RNZ's Taranaki Whanganui reporter Robin Martin went to meet the attention-grabbing bird.

    Zum Scheitern Verurteilt
    Außenreporterin für Kiwi

    Zum Scheitern Verurteilt

    Play Episode Listen Later Jan 3, 2026 64:51 Transcription Available


Looks like we all did it wrong, then. Or right? Or just differently everywhere? Does anything more than cheese go into a raclette pan? And while we're on the subject of food: the “picky eater” of the two isn't being picky on purpose. It turns out there's a name for it, a diagnosis of sorts, that she recognizes herself in. New Year's Eve also gets a recap: from Görli to the holiday house in the mountains and raclette, all the way to the ZDF show and the question of when and how you can get booked for the latter. The holiday house in the mountains, by the way, was a chalet with a few catches on the inside. From the outside, though: stunning, with quite a view. Write to us again sometime: hallo@zsvpodcast.de You can find our Instagram account here: https://www.instagram.com/zsvpodcast And here's the direct link to TikTok: https://www.tiktok.com/@zumscheiternverurteilt Want to learn more about our advertising partners? You'll find all the info & discounts here: https://linktr.ee/zumscheiternverurteilt Want to advertise on this podcast? Learn more about advertising opportunities with Seven.One Audio here: https://www.seven.one/portfolio/sevenone-audio

    The Mike Hosking Breakfast
    Best of 2025: Jimmy Carr talks comedy, upcoming tour on the Mike Hosking Breakfast

    The Mike Hosking Breakfast

    Play Episode Listen Later Jan 1, 2026 12:22 Transcription Available


"Such a joyful thing": Jimmy Carr talks comedy and his upcoming NZ tour. Jimmy Carr is well known for a couple of things, his controversial comedy and distinctive laugh chief among them. And he's bringing both to Kiwi audiences early next year, travelling right across the country and stopping in 13 different cities. He has had a prolific career in standup, as well as being a household name in UK television, not only hosting an array of panel shows but also appearing as a regular guest on many others. Carr has a busy schedule, and he told Mike Hosking that he works as much as he possibly can, as his work is such a joyful thing. “If I have a night off, what am I doing? I'm sitting at home having my tea,” he said. “If I come out and do a show, it's such a joyful thing.” “I also think I do have a propensity to get cancelled once in a while,” Carr confessed, the comedian having seen his fair share of controversies. “So you never know when your last one's going to be.” When it comes to cancel culture, Carr is a big advocate for freedom of speech. “I'm not for everyone, and edgy jokes, there's, you know, limits of it; sometimes it's not for everyone,” he told Hosking. “But the whole cancel culture thing, you go, well, as long as you don't get cancelled by your own audience, I think you're golden.” LISTEN ABOVE See omnystudio.com/listener for privacy information.

    Off The Podium
    Episode 522 - Lisa Carrington Interview

    Off The Podium

    Play Episode Listen Later Jan 1, 2026 68:42


    When it comes to the debate over New Zealand's greatest Olympian, one name rises above the rest: Lisa Carrington. The kayak sprint superstar has rewritten the record books with eight Olympic gold medals and one bronze, showcasing unmatched skill, power, and longevity on the world stage. In this episode, we're thrilled to sit down with the Kiwi icon herself to dive into her incredible journey — from her breakthrough at her very first Games in London, to the life-changing highs of Paris, and now her sights firmly set on a fifth Olympic appearance in Los Angeles 2028. Lisa shares how her training is evolving, where she actually keeps all those medals, and why she and Usain Bolt are basically the same person. We also chat about the perks of having a street named after you, being made a Dame, and, of course, the very special “Colin” in her life (spoiler: he might just be cuter than ours). It's an inspiring, funny, and unmissable conversation with one of the greatest athletes of all time.

    Outside/In
    Return of the Kiwi Apocalypse: 10 years of Outside/In

    Outside/In

    Play Episode Listen Later Dec 31, 2025 32:09


** We're celebrating our 10 year anniversary and want you to come! Join us in Portsmouth, New Hampshire for a night of storytelling, featuring former Outside/In guests and hosted by our very own Nate Hegyi. Get your tickets here! ** In celebration of Outside/In's 10th anniversary, we're looking back at our very first episode: “The Kiwi Apocalypse,” first published in December of 2015. Afterwards, we'll get an update to the story and talk about how weird it is to have a podcast old enough to be in middle school. Here's our original description for The Kiwi Apocalypse: Iago Hale has a vision: it's one where the economy of the North Country is revitalized by local farmers selling delicious cold-hardy kiwi berries to the masses. Meanwhile, Tom Lautzenheiser has been battling a hardy kiwi infestation in Massachusetts for years, and is afraid that this fight will soon be coming to the rest of New England. Should we worry about the cold-hardy kiwi, and what does the quest to bring it to market tell us about what an invasive species is? Featuring Iago Hale, Tom Lautzenheiser and Bryan Connolly. This episode was produced by our original host, Sam Evans-Brown. For full credits and transcript, visit outsideinradio.org. SUPPORT: Grab a ticket for our 10 year anniversary live show here! Outside/In is made possible with listener support. Click here to become a sustaining member of Outside/In. Follow Outside/In on Instagram or join our private discussion group on Facebook. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    The Tranquility Tribe Podcast
    Ep. 411 OB Tools that Parents Need to Know About with Dr. Tori O'Daniel from Laborie

    The Tranquility Tribe Podcast

    Play Episode Listen Later Dec 31, 2025 86:32 Transcription Available


In this episode, HeHe is joined by Dr. O'Daniel to break down the real tools and procedures used in labor and delivery, the ones no one explains until they're suddenly happening to your body. Together, they unpack what tests like ROM actually tell us about water breaking, when internal monitors like an IUPC are used, and what patients deserve to know before anything is placed inside their body. They also dive into operative vaginal deliveries, including vacuums and forceps, how clinical decision-making works in those moments, and why true informed consent matters so much when things move quickly. Dr. O'Daniel explains newer innovations like the Traxi for safer C-sections in larger bodies and the Life Bubble, a game-changing tool for supporting NICU babies. This conversation is evidence-based, honest, and incredibly empowering, especially if you want to walk into birth understanding the tools, not fearing them. Knowledge is advocacy, and this episode gives you plenty of both.
Guest Bio: Dr. Tori O'Daniel is a Board-Certified OB/GYN who has been practicing for 14 years. For the past 11 years she has been an OB/GYN Hospitalist at Mercy Medical Center in Oklahoma City, Oklahoma. Dr. O'Daniel is the Medical Director of the OB/GYN Hospitalist program and the Department Chair of the OB/GYN Department in her facility. She also instructs educational classes and facilitates the OB Emergency Simulations for the nurses and physicians within her department. She has been actively involved in the Society of OB/GYN Hospitalists (SOGH) for the past several years and currently sits on the SOGH Board of Directors. Dr. O'Daniel is passionate about education, and she actively teaches in multiple venues. She is a master trainer for Kiwi vacuum-assisted deliveries; she travels across the globe to train residents and attending physicians in the 5-Step Vacca Method.
laborie.com
Check out the tools Dr.
O'Daniel shared about here: https://www.laborie.com/products/obstetrics-gynecology/   SOCIAL MEDIA: Connect with HeHe on Instagram  Connect with Laborie on IG    BIRTH EDUCATION: Join The Birth Lounge for judgment-free, evidence-based childbirth education that shows you exactly how to navigate hospital policies, avoid unnecessary interventions, and have a trauma-free labor experience, all while feeling wildly supported every step of the way Want prep delivered straight to your phone? Download The Birth Lounge App for bite-sized birth and postpartum tools you can use anytime, anywhere. And if you haven't grabbed it yet… Snag my free Pitocin Guide to understand the risks, benefits, and red flags your provider may not be telling you about, so you can make informed, powerful decisions in labor.

    The Brew Happy Show
    A Holiday Special (with Kiwi) Part 1

    The Brew Happy Show

    Play Episode Listen Later Dec 30, 2025 45:03


    It's the holiday time of year and we like to thank all of our friends, listeners, and Patreon supporters, by inviting them to join us as we raise a glass to them. With some recognizable classic beer, and friendly cheer, you may want to sample some of this selection yourself. Featuring special guest John the Kiwi, on this episode of Brew Happy!

    The Holistic Nutritionists Podcast
    #204 We Can Do Hard Things - Nat's Kiwi Adventure

    The Holistic Nutritionists Podcast

    Play Episode Listen Later Dec 30, 2025 51:04


    What happens when you throw hiking boots, New Zealand landscapes, a menstrual cycle, and a suitcase full of supplements into the mix? This episode

    826 Valencia's Message in a Bottle
    King Carl and Kiwi Find the Golden Marshmallow by Lillian

    826 Valencia's Message in a Bottle

    Play Episode Listen Later Dec 29, 2025 2:28


King Carl and Kiwi Find the Golden Marshmallow, by Lillian, from 826 Valencia.

    The Property Academy Podcast
    “Buy The Worst House On The Best Street” – Is That Good Advice?⎥Ep. 2294

    The Property Academy Podcast

    Play Episode Listen Later Dec 22, 2025 13:03


“Buy the worst house on the best street” – classic Kiwi advice … but is it actually good in 2025? In this episode, Ed and Andrew delve into the data behind this old property mantra, revealing when it works brilliantly and when it can turn into a costly mistake. You'll learn: whether the “worst house, best street” strategy still stacks up today; what the numbers say about capital growth in top suburbs vs cheaper areas; and when this strategy works, and when it's a bad idea for long-term investors. This episode highlights situations where buying a fixer-upper makes sense … and why many investors are better off choosing properties with stronger yields and fewer maintenance headaches. Don't forget to create your free Opes+ account and Wealth Plan here. For more from Opes Partners: sign up for the weekly Private Property newsletter | Instagram | TikTok

    Talkin' Birds
    #1,069 Dec. 21, 2025

    Talkin' Birds

    Play Episode Listen Later Dec 21, 2025 30:00


    On our latest show (#1,069 Dec. 21): The five Kiwi species of New Zealand; Mike O'Connor on mixed winter bird flocks; and Freya McGregor on how she's helping make birding accessible to all.

    Oldie But A Goodie
    #340: Kiwi Christmas (2017)

    Oldie But A Goodie

    Play Episode Listen Later Dec 21, 2025 83:24


For our final episode of the year, we're turning the holiday vibes up to 11 and heading to Zach's home country for Kiwi Christmas! In this movie, Santa is over the commercialisation of Christmas, so much so that he runs away to New Zealand and crashes a family's summer camping holiday. Does this movie contain the classic New Zealand comedy we both love? Have we finally found a relatable Christmas movie set during summer? Does this one have any weird Santa lore that we spend way too long dissecting?
Join our Patreon for our bonus episodes! https://www.patreon.com/oldiebutagoodiepod
Follow the show! Instagram: https://www.instagram.com/oldiebutagoodiepod/ Facebook: https://fb.me/oldiebutagoodiepod Podcast Platforms: https://linktr.ee/oldiebutagoodiepod
Got feedback? Send us an email at oldiebutagoodiepod@gmail.com
Follow the hosts! Sandro Falce - Instagram: https://www.instagram.com/sandrofalce/ - Twitter: https://twitter.com/sandrofalce - Letterboxd: https://letterboxd.com/SandroFalce/ - Twitch: https://www.twitch.tv/SandroFeltChair - TikTok: https://www.tiktok.com/@sandrofalce
Zach Adams - Instagram: https://www.instagram.com/zach4dams/ - Twitter: https://twitter.com/ZackoCaveWizard - Letterboxd: https://letterboxd.com/zach4dams - Twitch: https://www.twitch.tv/zackocavewizard
Watch our editor, Starkie, on Twitch! https://www.twitch.tv/sstarkiee
Oldie But A Goodie's theme tune is written and produced by Josh Cake. Check out his work here: https://www.joshcake.com/
Check out other shows from our network 'That's Not Canon'! https://thatsnotcanon.com/ Hosted on Acast. See acast.com/privacy for more information.

    Fletch, Vaughan & Megan on ZM
    Fletch, Vaughan & Hayley's Big Pod - December 17th 2025

    Fletch, Vaughan & Megan on ZM

    Play Episode Listen Later Dec 16, 2025 81:47 Transcription Available


On today's episode of the Fletch, Vaughan & Hayley Big Pod, Fletch brings in lucky eggs to foresee what 2026 has in store, but will everyone be lucky? Also on the show: Kiwis' sexiest accent; a flight attendant reveals the biggest reason people get pissed off at them; Top 6 - things at a supermarket owned by a 29-year-old; personality traits of people who walk fast; why Christmas is killing your sex drive; top Gen Z stains; SLP - is matching outfits with your partner cute?; what is too spicy for you?; the name Sarah is popular again; Fallout interview; Fletch has lucky eggs; fact of the day; and what is the petty reason you don't talk anymore? See omnystudio.com/listener for privacy information.

    RNZ: Checkpoint
    Kiwi cricketers vying for a life changing pay day

    RNZ: Checkpoint

    Play Episode Listen Later Dec 16, 2025 4:34


Several Kiwi cricketers are vying for a life-changing payday this evening. A host of Black Caps have entered the Indian Premier League auction in Abu Dhabi, giving themselves a chance to become overnight millionaires. Sports reporter Jonty Dine spoke to Lisa Owen.

    RNZ: Nine To Noon
    What's driving New Zealand's gaming development boom

    RNZ: Nine To Noon

    Play Episode Listen Later Dec 15, 2025 14:35


    What's driving the year-on-year successes of Kiwi game developers?

    #AmWriting
    How to Write the Book Only You Can Write

    #AmWriting

    Play Episode Listen Later Dec 12, 2025 25:34


    Rachael Herron's latest: The Seven Miracles of Beatrix Holland, is, truly and in so many ways, the book only she can write. It pulls from every part of her life: identity, spirituality, a love of what's magical in the world, her joy in crafting and her understanding of community and family. I, of course, wanted to know: how did you find the guts to put it all on the table? We talked about vulnerability, the challenges of writing the book of your heart, and learning to play with what you fear. Rachael says, “I'm spoiled for any smaller kind of writing. I'm not sure I can go back.”You're gonna love it. Links from the Pod:The Seven Miracles of Beatrix HollandInk in Your Veins podcastRachel's website: https://rachaelherron.comThe Jennifer Lynn Barnes “take my money” list.The War of Art, Steven Pressfield#AmReading:Careless People, Sarah Wynn-Williams This Is Not a Book About Benedict Cumberbatch, Tabitha Carvan Transcript below:EPISODE TRANSCRIPTMultiple SpeakersIs it recording? Now it's recording—yay. Go ahead. This is the part where I stare blankly at the microphone. I don't remember what I'm supposed to be doing. All right, let's start over. Awkward pause. I'm going to rustle some papers. Okay, now—one, two, three.KJ Dell'AntoniaHey, listeners, this is the Hashtag AmWriting Podcast, the place where we help you play big in your writing life, love the process, and finish what matters. I am KJ Dell'Antonia, and today I am bringing to you an interview with Rachael Herron. I just finished talking to Rachael, and I really enjoyed this. We talked about vulnerability. We talked about the challenges of writing the book of your heart. We talked about what should show you where that book is, the idea that the fear is where you should play. It's, it's a really great interview, and I know that you are going to enjoy it.Let me tell you a little bit about Rachael. 
She is the author of so many, so many books, thrillers and romances, and most recently, in the book that we are talking about, The Seven Miracles of Beatrix Holland. And I have to read you—Rachael's going to describe this to you, but I got to read you the very short thing that basically made me say, take my money. And it went like this. A psychic tells Beatrix Holland that she'll experience seven miracles and then she'll die. No problem, though, Beatrix isn't worried. She is above all things pragmatic. She vastly prefers a spreadsheet to a tall tale. Then the miracles start to happen.It's a really great book, and more importantly, it's a big book. It is a book where Rachael is writing what comes from deep inside, and it is a book that only Rachael could write. And that is why I asked Rachael to join me today. I hope that you enjoy this interview, and before I release you to it, I just want to remind you that the place to go to talk more about writing big and playing big in your writing life is anywhere that we are: the AmWriting Podcast, Hashtag AmWriting, AmWritingPodcast.com. Find us on Substack. Find us by Googling. Grab those show notes—you should be getting them—and join us for all the different ways that we need to come together in a community to give each other the strength to do our very best and biggest work.So I'm going to ask you to describe The Seven Miracles of Beatrix Holland to me. But also before I even do, I want to say how much I enjoyed it. And also so we have been spending most of our time on the AmWriting Podcast lately talking about writing—writing big and striving big and trying to do something different and bigger and better than what you have done before. We, I think as writers, we're always trying to up our game, but there's upping your game, and there's reaching for the stars. 
And I felt like this book reached for the stars in a way that you maybe didn't even set out to because to me, as someone who has read much of your work and followed your career and listened to a lot of the Ink in Your Veins Podcast and sort of just knows what's going on with Rachael, this is the book that only you could write. So when I say this is your big book, I don't mean, you know, that this is, is going to be a—I'm sorry—I don't actually mean that 200 years from now, people will be passing this around.Rachael HerronExactly.KJ Dell'AntoniaWhat I mean is that this is you. This is and it's you. All of your books are you, but this was really you in a way that felt downright magical to me. And it's a magical book. So can you tell us a little bit about Beatrix Holland? And I will also say that even before I read it that you had me at the premise. So give us that.Rachael HerronWell, I don't know how to talk about it now that you've talked me up so well. But thank you. Thank you for, you know, being honestly an ideal reader for this book. The Seven Miracles of Beatrix Holland is about a woman who is pragmatic and sensible and doesn't believe in, you know, mumbo jumbo, not really worried about that kind of thing. But she is told by a psychic that she will experience seven miracles and then she will die and whatever, that's not a big deal. It doesn't bother her, because none of it is true. She doesn't believe it. And then, me… miracles start to occur; things that even she cannot say are not miracles. And so therefore, maybe, what about that death thing that's going to be preying on her mind?KJ Dell'AntoniaSo on top of that…Rachael HerronWho likes what the book is about…KJ Dell'AntoniaWe're on an island, and there's family secrets being revealed. 
And there are amazing family secrets that I think many of us would, I mean, they're kind of awful, and I've talked to some people, and some people would be thrilled by them, and some wouldn't, but yeah, just it just kind of keeps giving and giving and giving. And it's funny because you say I'm the ideal reader, and actually, I don't know that I necessarily would be…Rachael HerronOh, that's even better…KJ Dell'AntoniaExcept, if somebody else had written this, I would not be the ideal reader. And I don't think that's because I know you. I think it's because of the way that you wrote that. And when what I when I say, I wouldn't be the ideal reader, I am getting a little tired of books that are giving me certain specific elements that are very trendy right now and that people feel obliged to give me. And you know you have, certainly, you've got LGBTQ characters in this, but also you have LGBTQ characters in your life. You are yourself such a character.Rachael HerronAs my wife is one of them over in the other room.KJ Dell'AntoniaAnd this isn't me saying I will only read books about queer people by queer authors. No, no, no. It's that these are the thing, the elements of this book that sort of fall into that, that are just there, because that's your life and what you see…Rachael HerronRight. Right.KJ Dell'AntoniaAnd it just is perfectly natural. And of course, you have a lot of—and it's in the sort of the same way that, of course, there's a lot of witchiness and spirituality, because it's part, it's part of you and part of who you are. So it's, it's, it reads as authentic.Rachael HerronOh, that's such a, that's such a—that's such a huge compliment. I wrote this book to please myself.KJ Dell'AntoniaThat's what… that's my next question. Don't make me. Don't make me interrupt you. What? That was my question. What was your intention? 
What did you set out to do with this book?Rachael HerronI—so this is my sixth genre, and I've been writing for—I've been published for 15 years, and this is my 26 or 27th book. I've lost, I can't remember, maybe more. I have a list somewhere. And I have always thought about, you know, the market and what people want to read and what people want to hear, as you know, as you know this, you've been, you've been doing the same thing a long time.KJ Dell'AntoniaAnd there's nothing wrong with that.Rachael HerronThere's nothing wrong with writing tree, market around market, exactly. But, but in this case, I wanted to write a book, and I wanted to have fun, and, and, and to be honest, I talk about this regularly is that I was going to self-publish it. I didn't even want to deal with my agent coming back and saying, oh, you should edit it this way. Or, you know that this or that editor doesn't want it, or they wanted to change in some way. I wanted to write a—I wanted to write a series of about found family, and I did, I did the Jennifer Lynn Barnes thing, the adored Taylor, where I just, I just made the list of everything I love the most. You know, I love witch stuff. I love practical magic. I love sisters. I love twins separated at birth. Why wouldn't I? I love grumpy, grumpy, older women and fireflies and all of the things that I love the most. And I and I wrote that book, and it was one of the fastest books I've ever written, and not because I was rushing, just because it came easily. I was following my heart and following my gut, and I was also following my tarot cards. When I would get stuck, I would just pull a tarot card and see what it did with my subconscious and moved me forward, and I it was just play. And then I revised it quickly. 
I hired my favorite editor, edited it, got it copy edited, and then I decided, oh gosh, I don't think I want to do a whole series, and I'm not sure if I want to self-publish, because that's a lot of work, so I'll just let my agent have it and to see if she could sell it. And she said, okay, I'll take a look at it and see if I could sell it. And then it sold at auction because it was, I don't… there's no because there it was just no surprise. There's no because there's no because there's never a because in publishing. You can also write the book of your heart.KJ Dell'AntoniaYeah, and then this—the rest of the story wouldn't fall that way and it would never sell that way…Rachael HerronExactly. So it happened to go this way. And of course, a lot of it is a lot of it is luck. Cozy, cozy, queer fantasy is, you know, on an upswing right now, but that wasn't, you know, a couple years ago. It took a couple years for it to come out.KJ Dell'AntoniaWhat do you love most? Yeah, what do you love most about this book and the experience?Rachael HerronThe thing I love most about the whole experience is that it has spoiled me for any other kind of writing; I think now, which may be a good or a bad thing. Ask me in a few years. But I kind of refuse now to write a book that I don't desperately want to write, that I can't stop thinking of. Because I've written a lot of books that I love, but they were, you know, what they were, they were my job. They were the book I sold. And now I will write the book that I sold. Now I will do, do what the contract says. And I don't want to do that anymore. I just want to write the books that grab me and fascinate me and keep me in their thrall and what that means is that I have to, you know, focus on other ways to bring in money and to support. And really, I'm now, I'm supporting this writing passion with things like teaching and with, you know, you know, old backlist books. But I'm not, I'm not sure if I can go back. 
I don't want to, I don't want to be a work a day writer, writing to a contract that I don't maybe love as much as other contracts I've had, right?KJ Dell'AntoniaYeah.Rachael HerronSo, yeah, it's spoiled me a little bit that way.KJ Dell'AntoniaSo are there other ways that this book feels bigger than things that you have written before? And this is again; we're not denigrating our old work. We're not…Rachael HerronNo, of course not. Of course not. I think that every—for me, it's always been a goal that for every book that I write, it needs to be me playing bigger. It needs to be me playing truer, more, more free. And in this book, it's only recently come up in my in my consciousness that I think that I needed to leave the United States and move around the world to New Zealand. And one of the reasons we left the states was because we were scared of the way LGBTQ rights are, are trending. There's 867 pieces of legislation that are anti LGBTQ on the dockets right now in the United States, and that's, that's up by like 700% in the last four years, and it's and it's terrifying. But it I didn't strike me until recently that this is my first novel that has a queer love story. It's not a romance, but there's a queer, queer love story inside it. And I finally, perhaps, felt safe enough to do that, you know, because it and when I came into the industry, I came in writing straight romances, because that's what would sell. And when I would ask to write other things that was turned down by traditional publishing because they thought it wouldn't sell. And then, you know, obviously self-publishers came along and said, oh, there is a market. Wow, look who wants to read these books. But, and so it was me kind of exposing myself in that way, and also me exposing myself in in the way that Beatrix does is that I always, I also just want to believe in magic. 
I want to believe in things out there that I can't explain, that are bigger than me, that I don't actually need a name for or to understand. Because if I could understand something that big, something that is powering the universes… I can't be expected to understand that. But can I engage with it? Can I play with it in the exact same way that Beatrix does? I think the answer is yes. And I did. When I would pull the tarot cards to help me write the next chapter if I got stuck, it was an actual process of engaging with a larger thing, saying, I don't know how to write this book. Help me write this book. Asking for help in writing this book from whatever is out there. I don't have big ideas about it, but yeah. So that was scary, and maybe that's why I originally wanted to self-publish it, because then it felt like I could keep total control.

KJ Dell'Antonia: Sure.

Rachael Herron: If I did that.

KJ Dell'Antonia: Of course, you could keep anyone who wouldn't like it from reading it then.

Multiple Speakers: [Both laughing]

KJ Dell'Antonia: Yeah, okay, so maybe not so much. But no, I get it. It must have felt…

Rachael Herron: Yeah.

KJ Dell'Antonia: Less vulnerable. So I was going to ask you next what was hard about it. And I guess, is that what was hard? But maybe something else was.

Rachael Herron: Let's see. So that was hard, being that honest and vulnerable. And you know how, when we write our novels, the thing that we want to do is be as truthful as possible, even though we're just making up a pack of lies? It often feels more true even than memoir can when we're doing this. What else felt hard? Not much felt hard about this book. And I have had books that I have struggled with like I am wrestling muddy alligators for decades at a time. That's what those books feel like. And there's nothing wrong with those books.
They were just, you know, where I was at the moment. But this book is one of those gift books. I must have struggled, and I do not remember. I honestly do not remember struggling.

KJ Dell'Antonia: Well… I wish for…

Rachael Herron: I just remember it being joy.

KJ Dell'Antonia: …all of us. I wish that journey for all of us. Oh, yeah, yeah…

Rachael Herron: As usual, I struggle whenever I get copy edits back. When I get copy edits back, I realize I don't know how to write a sentence.

KJ Dell'Antonia: So if any of our listeners are sort of trying to find within themselves the freedom to write what they really want to write, and maybe can't even figure out what the heck that would be, what would you say to them… asking for a friend?

Rachael Herron: I would encourage them to do one of those "ID lists": to sit down and write a list of the things that, if you saw something about them listed on the box of the video cassette at the video rental store, because that's how old I am, would make you pick it up and rent the movie. Write down all of the things that you love the most, and then actually use it as an exercise in creativity within constraints. How many of those things can you actually shove in there? Can you get them all in there? The other thing I like to ask myself when this question comes up is: if I walk into any bookstore, and it doesn't actually matter if I'm alone or not, and I reject any "shoulds," you know, should I look for that cookbook I was thinking about, or should I look for that new nonfiction I heard about on the podcast… if I'm released of all shoulds, and say somebody tells me I can only look at one section of the store today, what is the section of the store that I will go stand in front of and pull books off the shelf and look at?
And perhaps that is a clue as to where you should be writing.

KJ Dell'Antonia: And how about freeing yourself up to actually do it? We can't all move to New Zealand, Rachael.

Rachael Herron: [Laughing] Freeing yourself up, do you mean, to write the book, to write that book?

KJ Dell'Antonia: To write that book. Yeah, most of our listeners… we tend to be a podcast for professionals or people who are playing professional, so, you know, these aren't people who can't put their butt in the chair. But to be vulnerable and admit that you want to go bigger, and then do it: that's a different question. Got any advice for that?

Rachael Herron: I do like to think of Steven Pressfield's advice from his book The War of Art, where he talks about Resistance with a capital R. The place where you feel the most resistance, that's your compass pointing north to what you are meant to do. And a lot of times when we think about these bigger stories that we may want to write someday (someday, right? When I get there, I'll write it someday), you've already got this compass pointing you there, and it is terrifying. And the fear of "how can I do that now?" is maybe the thing that says you do not need to put aside the fourth book in the series that you need to finish before you write this next series. You can do that. But maybe listen to that resistance, listen to that fear, and dedicate 15 minutes, three times a week, to playing with the idea of this book. If you were to start to write it anytime in the future, you can at least be courting it and flirting with it, making it know that you are going to be available to write that book of your heart, because we all need that. We all need that.
We also need to pay the bills and do the professional writing and do all that too.

KJ Dell'Antonia: Yeah, yeah.

Rachael Herron: But…

KJ Dell'Antonia: We've got to try to do the biggest things we can. All right. Well, that's a great place to lead into my next question, which is: what have you read recently where you really thought the writer was playing big?

Rachael Herron: Can I give you two?

KJ Dell'Antonia: Of course!

Rachael Herron: Okay, the first one (and strangely, these are both nonfiction, so make of that what you will) is Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism by Sarah Wynn-Williams, who is a Kiwi. Have you heard of this one?

KJ Dell'Antonia: Oh yeah. This is the…

Rachael Herron: Oh yeah, the Facebook book.

KJ Dell'Antonia: The Facebook book. We moved fast, and we did indeed break things.

Rachael Herron: We did move fast. We broke things. And Sarah has a uniquely Kiwi sense when she's looking at them, because she goes in and she's really watching it all happen. And I don't care about Facebook; I don't actually engage with all of the stuff that's said about it. But this book is written basically like a thriller. I couldn't put it down. And she was fearless, the things that she said. No wonder Zuckerberg wanted to silence it; he looks like a moron. She was absolutely fearless. And it was one of those schadenfreude-y "why am I reading this? Why can't I put this down?" books, and I think it was because of her bravery.

KJ Dell'Antonia: Yeah.

Rachael Herron: So I really enjoyed it for that. And then the other one I want to tell you about is kind of on the flip side, and you may not have heard about this one. It's called This Is Not a Book About Benedict Cumberbatch.

KJ Dell'Antonia: Not only have I heard about this one, it's entirely possible that I sent it to you.

Rachael Herron: Really?!

KJ Dell'Antonia: I love this book! All right, go on.
Go on.

Rachael Herron: …The Joy of Loving Something--Anything--Like Your Life Depends On It, by Tabitha Carvan. Oh my god, isn't it brilliant? She writes about how, yes, she does love Benedict Cumberbatch, who I'd really never considered very much in my life.

KJ Dell'Antonia: No, I couldn't pick him out of a lineup of youthful-ish…

Rachael Herron: Yeah.

KJ Dell'Antonia: British-ish…

Rachael Herron: Yeah.

KJ Dell'Antonia: Actor-ish…

Rachael Herron: And she loves him, loves him, loves him, no joke, loves him. And the whole book is about recovering from any shame around loving the thing that you were put on this earth to freaking love with your whole heart, no matter what anybody says. And I really think Benedict Cumberbatch is a really great thing to tie this whole book together.

KJ Dell'Antonia: It had to be something like that, because if it was, like, knitting, I mean…

Rachael Herron: Right, exactly.

KJ Dell'Antonia: Okay, that's fine, honey, you can love your knitting. And you know, it also is…

Rachael Herron: Exactly.

KJ Dell'Antonia: You know, it also is…

Rachael Herron: This is not a book about yogurt. Who cares, you know? But Benedict Cumberbatch is funny to say. He's actually kind of funny to look at when you do look him up. And it's so evocative, and it is something that people would snicker at.

KJ Dell'Antonia: Yeah.

Rachael Herron: Right? People would snicker.

KJ Dell'Antonia: Still, even… yeah. She snickers at it herself. But also she's like, okay, why? Why is that, you know? Why would it be? If I were super obsessed with the stats of some obscure baseball player, no one would mock that. If I wanted to watch every football game played by, you know…

Rachael Herron: That blew my mind when she said that. Of course, of course. And she goes deep. Again, she's so brave. She plays big. She goes into what it means. How does it affect her husband? What does she think about how it affects her husband? She goes all of the places.
I bet you did tell me about it, and I'm so glad that you did.

KJ Dell'Antonia: I love it. I keep extra copies to force people to read it. I tie people up in, like, you know, parts of my house and force them… no, I don't really do that.

Rachael Herron: [Laughing] I love that. But what do those books have in common? I think what both those books have in common is these women who, at any point, anybody in the whole world could have told them, "That's not really a good idea to write."

KJ Dell'Antonia: Yeah, no, that's exactly right.

Rachael Herron: And it would've been true.

KJ Dell'Antonia: Yeah. It would have been true. It would have been excellent advice.

Rachael Herron: Excellent advice not to write that book.

KJ Dell'Antonia: Really, you should not admit that you love Benedict. Or really, I mean, you're never going to work in this town again, man.

Rachael Herron: You're never going to work in this town again. And during the whole book of Careless People, she's talking about being inside; she is inside the beast that is doing the damage. And that's brave too. And I don't think Seven Miracles is as brave as those books, but there was bravery and resistance around moving toward really putting yourself on display.

KJ Dell'Antonia: Run towards the fear.

Rachael Herron: And that's what we writers do.

KJ Dell'Antonia: That's our theme.

Rachael Herron: Yeah, run towards the fear. Even if you can only give it 15 minutes a day or so, three times a week, that's enough. That's good enough to tell your bravery it should come back more.

KJ Dell'Antonia: Yes.

Rachael Herron: Scooch toward bravery, little scooches.

KJ Dell'Antonia: Edge towards the fear. Tiptoe.

Rachael Herron: Oh, that's beautiful. I love that you're doing this series.

KJ Dell'Antonia: We love it too. So, yeah, it's going great. Well, again, thank you. I was really excited to talk to you about this book. I was really excited to read this book.
I enjoyed the heck out of it, and I think, listeners, that you would too. You should absolutely check it out, as well as all the rest of Rachael's work. Links, of course, as always, in the show notes, and follow Rachael in all the places. Although, to me, the best thing to do is to go and listen to the Ink in Your Veins podcast, because obviously, people, you're a podcast listener, or you wouldn't be here. Where do you most like to be followed, Rachael?

Rachael Herron: At Ink in Your Veins, or at Rachaelherron.com/write if you are a writer and want to get on the writing encouragement list. But I just want to thank you for doing this amazing show and for having me. I feel very, very honored to be here.

KJ Dell'Antonia: Well, thank you. All right. And as we say in every episode: until next week, kids, keep your butt in the chair and your head in the game.

Narrator: The Hashtag AmWriting Podcast is produced by Andrew Perrella. Our intro music, aptly titled Unemployed Monday, was written and played by Max Cohen. Andrew and Max were paid for their time and their creative output, because everyone deserves to be paid for their work.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit amwriting.substack.com/subscribe

    Fate of Isen: A Kiwi D&D Podcast
    Book 2 Ep71: Sky Boars and Sky Lores

    Fate of Isen: A Kiwi D&D Podcast

    Play Episode Listen Later Dec 11, 2025 85:26


    A surprisingly deadly and chaotic fight ensues in the Sky barn, leading Granny to encounter a strange otherworldly figure during a near-death experience. Will this be the end of the Sidebars???

    Featuring:
    Erika Jayne as Taryn Grim
    Severin Gourley as Dexter Clementine
    Kasia Wayfinder as Granny Sabinka
    and Julz Burgisser as DM

    Visit www.fateofisen.com to learn more. Fate of Isen is one of the Feedspot top D&D podcasts in the world! Check out Feedspot here. If you like the show, please feel free to follow us on social media (@fateofisen) or support us on Patreon! ★ Support this podcast on Patreon ★ Intro, outro, and recap music by freesound user Tyops, and ambient sound by TabletopAudio.com

    Build Your Copywriting Business
    Copywriting from a Rural New Zealand Kiwi Farm - Catherine's Story (Episode 257)

    Build Your Copywriting Business

    Play Episode Listen Later Dec 3, 2025 31:36


    Ever feel like some stories just stay with you? That's Catherine's story for me. Catherine stepped away from a successful corporate career to raise her family in rural New Zealand. Then came a cancer diagnosis that flung everything into perspective. On this episode of the Build Your Copywriting Business podcast, Catherine shares how she built relationships with other business owners in her rural community (and beyond), replacing her corporate salary with her copywriting income. But let me be real with you: Catherine's success didn't happen overnight. She's sharing the areas she focused on, how she consistently put in effort, and the payoff she saw as a result. I can't wait for you to hear this one, so, without further ado, take a listen!

    Mentioned in the Episode:
    Catherine's Website
    8 Tips for Networking Success
    Find Copywriting Clients by Networking in the Right Places
    Questions You Need to Ask Your Copywriting Clients

    Related Links:
    Ep. 115: This Teacher Wanted More Work/Life Balance… and Found It with Copywriting – Charlotte's Story

    Get Free Copywriting Training here

    No Laying Up - Golf Podcast
    1095: New Zealand - Tara Iti and Te Arai

    No Laying Up - Golf Podcast

    Play Episode Listen Later Nov 26, 2025 151:17


    Back in March, TC and Neil packed their bags and flew across the world to play at Tara Iti and both the North and South courses at Te Arai. We recorded this pod several months ago but hope you enjoy The Brothers Schuster reliving their Kiwi experience at some of the most scenic golf holes on the planet. Join us in our support of the Evans Scholars Foundation: https://nolayingup.com/esf

    Support our Sponsors: Rhoback, The Stack

    If you enjoyed this episode, consider joining The Nest: No Laying Up's community of avid golfers. Nest members help us maintain our light commercial interruptions (3 minutes of ads per 90 minutes of content) and receive access to exclusive content, discounts in the pro shop, and an annual member gift. It's a $90 annual membership, and you can sign up or learn more at nolayingup.com/join

    Subscribe to the No Laying Up Newsletter here: https://newsletter.nolayingup.com/

    Subscribe to the No Laying Up Podcast channel here: https://www.youtube.com/@NoLayingUpPodcast

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    Up First
    Kiwi vs. Predator

    Up First

    Play Episode Listen Later Oct 26, 2025 23:49


    In New Zealand, a nationwide extermination campaign is underway, one of the most ambitious in the world. The country is home to more than four thousand native species that are threatened or at risk of extinction. To protect its biodiversity, New Zealand has embarked on an experiment that aims to eradicate all invasive species by the year 2050. Can the country pull it off? And how far should humans go to reverse the damage we've caused?

    Learn more about sponsor message choices: podcastchoices.com/adchoices

    NPR Privacy Policy