Podcasts about Goldfinch

  • 521 podcasts
  • 709 episodes
  • 47m average duration
  • 1 episode every other week
  • Latest episode: Oct 27, 2025

POPULARITY

[Popularity trend chart, 2017–2024]

Latest podcast episodes about Goldfinch

Chill Filtered
Episode 382: Found North Goldfinch (First Flight)

Chill Filtered

Play Episode Listen Later Oct 27, 2025 74:33


On this episode of Chill Filtered, Cole and Bryan pour the newest release from Found North — Goldfinch (First Flight). Before diving in, they talk about old episodes, healthy boundaries when it comes to alcoholism, and what makes Canadian whisky so unique in the whiskey world.   On Whiskey World News, Bryan reads about Jack Daniel's newest special release: Tanyard Hill Rye.   And on What Whiskey Would You Choose?, the boys ask: What whiskey would you pour your mom to help her realize that not all whiskey is disgusting?   A thoughtful episode with heart, humor, and a special Canadian pour that might just take flight.

This is My Bourbon Podcast
Ep. 403: This is my Found North Goldfinch Review + The Best Widow Jane "The Vaults" Release Ever?

This is My Bourbon Podcast

Play Episode Listen Later Oct 22, 2025 60:28


BACK for another review, and this time I'm checking out something from our friends over at Found North. It's a brand new 15 Year Old Canadian Whiskey with some truly special offerings inside that you're not gonna want to miss out on, and I think you're gonna want to hear what I have to say. Plus, I crack into the newest Widow Jane Bourbon and see what's going on with the Mythological Oak finish. Things are interesting right now, folks, and I hope you enjoy.

Become a patron of the show at http://www.patreon.com/mybourbonpodcast
Leave us a 5 star rating and review on your podcast app of choice!
Send us an email with questions or comments to thisismybourbonshop@gmail.com
Send us mail to PO Box 22609, Lexington, KY 40522
Check out all of our merch and apparel: http://bourbonshop.threadless.com/
Leave us a message for Barrel Rings at 859.428.8253
Facebook: https://www.facebook.com/mybourbonpod/
Twitter: https://twitter.com/mybourbonpod
Instagram: https://www.instagram.com/mybourbonpod/
YouTube: https://www.youtube.com/thisismybourbonpodcast
PayPal, if you feel so inclined: PayPal.me/pritter1492
Link to our Barrell Rye Armagnac Finished Pick: https://shop.whiskeyinmyweddingring.com/products/barrell-private-release-rye-1a03
Support the show

Alabama's Morning News with JT
Kate Lincoln-Goldfinch on how the government shutdown impacts employees

Alabama's Morning News with JT

Play Episode Listen Later Oct 9, 2025 7:10 Transcription Available


Stuart Bowditch Podcasts
Boat Building Near Flatford Mill - 2nd October 2025 (excerpt)

Stuart Bowditch Podcasts

Play Episode Listen Later Oct 5, 2025 20:00


Diners with trays of food and drinks from the cafe, the air con unit drone, this year is a ‘mast year' for local trees which can be evidenced by the abundance of acorns on the ground, mast years are possibly a way that trees work together to create more fruit/seeds in one year than can possibly be eaten by seed-eating animals, increasing the likelihood of seedlings growing next year, peduncle, the aircon clicks off, geese, lots of interest in the microphone as it's in a very public place but I keep a low profile, conversations of diners, a dog barking, trays being returned to the rack, a tiny twister picking up leaves, hikers in brown boots and blue jumpers, the kissing gate slamming sound travelling on the wind, a lady carrying a bag full of poo, two dog bowls at different heights, my coffee finished, as is the flapjack, Table 123, walking sticks, ‘John!', John acknowledging where his party are seated, the smell of soup, the Site Manager coming over for a chat about the weekend's workshop, a pile of bricks, two yard bags on pallets, ‘Hort Loam' printed on the side of one of them, two men being curious about the mic. A never-ending stream of people, interested, curious, wanting to explore, experience and learn. They're passing by here, passing, being born, passing by and passing again. Here is still here but for how long will the cycle continue? Slowly the cafe activity is winding down towards closure and as the localised sound dies down sound from further afield can reach us, such as a tractor ploughing the field. The Flatford Accessible Shuttle Citroen electric vehicle, a woman carrying a bunch of yellowing oak leaves, a man opening and closing the gate for the car to pass through, it has slightly flat tyres, a conversation about birds that I don't quite catch, chairs in the cafe being rearranged, cutlery being moved on the collected trays, a puff of wind moving all of the leaves at once but only by a couple of inches, the door to ‘back stage' being closed by Maddie, a very slow wheezing pug in a blue harness, a window being closed, a door being bolted, the last diners leaving the garden, a moment of reflection. The earth wearing lands cape Land belongs We long to live Live to die Die to land the dream of love Love of another The other is wise Wise of words Words escape A cape of good hope But we'll need much more than that. Birds identified (in the full hour recording) are Dunnock, Robin, Goldfinch, Greenfinch, Long Tailed Tit, Mallard, Great Tit, Magpie, Wren and Spotted Flycatcher!

Outdoors with Rob Zimmer
October 3, 2025 | American Goldfinch, Raptors + Bats, Native Wildflowers from seed

Outdoors with Rob Zimmer

Play Episode Listen Later Oct 3, 2025 37:44 Transcription Available


Good Morning Portugal!
Portuguese Bird of The Month with James 'Old Guy in Europe' Holley - September 25 #birds #portugal

Good Morning Portugal!

Play Episode Listen Later Sep 22, 2025 8:43 Transcription Available


Mike, Mike, and Oscar
Daniel Day-Lewis Returns + Summer Box Office Report Cards - ORC 8/25/25

Mike, Mike, and Oscar

Play Episode Listen Later Aug 26, 2025 67:21


Summer Box Office Report Cards are the episode's centerpiece. Until then, we discuss Daniel Day-Lewis' return, JLO's latest musical and new animated feature contenders during our Trailer Reviews segment. We review new movies like K-Pop Demon Hunters, Honey, Don't, Relay and Eenie Meenie during our Box Office Report and What We're Watching segments. Plus, AlsoMike turns heel with a shockingly bad review of a beloved classic.

Clayton Davis' latest piece on "No Kings" in the Race Thus Far - 1:50

TRAILER REVIEWS:
Anemone, starring Daniel Day-Lewis - 5:04
Arco, a new animated feature contender from NEON - 9:04
The Mastermind, a future MMO classic from Kelly Reichardt - 12:26
Hedda, a new YouTube trailer hit gives us Saltburn & sexy Downton Abbey vibes - 15:14
Eleanor The Great, a hope that this movie might not actually be another Goldfinch - 17:31
Kiss of the Spider Woman, a JLO starrer with a 2nd trailer that gives us pause - 19:48

BOX OFFICE REPORT:
K-Pop Demon Hunters, AlsoMike's Review + Netflix's #1 Box Office Sing-A-Long - 23:20
Talking Weapons' holding strong and the rest of the Top 8 - 24:13
Reviewing Honey, Don't - 25:51
Reviewing Relay, starring Riz Ahmed and discussing the rest of the Top 15 - 27:28

SUMMER BOX OFFICE REPORT CARDS (BY GENRE & SINCE MAY):
Superhero Films: Thunderbolts*, Superman & The Fantastic Four - 30:44
Pure Action like Mission Impossible, Ballerina, etc. + A Lone Sports Film Hit in F1 - 34:34
Comedies from Friendship to The Naked Gun - 37:01
Dramas like The Life of Chuck & Eddington + Romances like Materialists - 40:13
Horror is still crushing it at the box office from Clown in a Cornfield through Weapons - 44:06
Family Films are buoying it all from Lilo & Stitch through Jurassic World - 46:55

WHAT WE'RE WATCHING:
Other than what we've reviewed already… Eenie Meenie + a Sinners rewatch - 50:35
The Conjuring: The Devil Made Me Do It + The Last Detail revisited by M1 - 52:34
M2's slander-filled heel turn for a beloved classic. What has he done?! - 55:55

OUTRO: Make sure to throw your rotten vegetables at AlsoMike via social media. You can find all our links here https://linktr.ee/mikemikeandoscar Otherwise, if you're somehow still taken with us, do please support our show by liking, subscribing, rating, and reviewing our podcast favorably in the eyes of all the algorithms out there. We certainly thank you for sticking by us during these contemptible times.

Building PA Podcast
Surgery Beyond Opioids: Effective Pain Management Strategies with Brand Newland

Building PA Podcast

Play Episode Listen Later Aug 12, 2025 33:05


In this episode of the Building PA Podcast, co-hosts Jon O'Brien and Chris Martin celebrate five years of podcasting while diving into a critical topic: opioid awareness and pain management in the context of surgery. They are joined by Brand Newland, the CEO and co-founder of Goldfinch Health, who brings a wealth of knowledge and a fresh perspective on how to navigate the challenges associated with opioid prescriptions following surgery.

Brand begins by addressing the opioid crisis, emphasizing the need for a shift in how we approach pain management, particularly in surgical settings. He shares insights from his experience as a pharmacist and discusses the founding of Goldfinch Health in 2018, which aims to improve the surgical experience by advocating for better pain management practices that reduce reliance on opioids.

The conversation highlights the importance of multimodal pain management strategies, which include using non-opioid medications like acetaminophen and ibuprofen, as well as innovative approaches such as the TLC method (Tylenol, Lyrica, and Celebrex) to manage pain effectively before and after surgery. Brand explains how these methods can lead to quicker recovery times and fewer complications, ultimately allowing patients to return to work sooner—an average of 35 days faster than national benchmarks.

Jon and Chris share their personal experiences with opioid prescriptions, illustrating the common issue of being prescribed excessive amounts of pain medication post-surgery. Brand reassures listeners that there are alternatives and encourages them to advocate for themselves and their loved ones when it comes to pain management.

The episode also touches on the financial implications of better pain management practices, with Brand discussing a recent study that shows a significant return on investment for employers who implement Goldfinch's programs. This is particularly relevant for those in the construction industry, where physical labor is a daily reality, and managing pain effectively can lead to improved productivity and overall well-being.

As the discussion progresses, Brand introduces the concept of enhanced recovery protocols and the importance of preparing for surgery in a way that minimizes pain and anxiety. He shares practical tips, such as staying hydrated and consuming clear liquids before surgery, which can help improve recovery outcomes.

Towards the end of the episode, Brand discusses Goldfinch's new initiative, the Billion Pill Pledge program, aimed at reducing the number of leftover opioid pills after surgery. He emphasizes the importance of safe disposal methods to prevent accidental poisoning, particularly among children.

Trilogy Outdoors
Season 3 Episode 114 Talkin Africa with Shi-awela Safaris

Trilogy Outdoors

Play Episode Listen Later Aug 1, 2025 71:44


It was great to have our partners from Shi-awela Safaris and Lodge on with us. With the incredible hunts that the Goldfinches and all the other guests have been having, it was time to get them on from over in their incredible part of the world. We are excited to talk about many things, from Stephen's most recent hunts to John D's latest hunts himself, which include a management hunt for elephants in Zimbabwe. You do not want to miss this great story that we share, and you can go make your own stories with our official safari partner. We will be happy to share all the info, and you can also check them out and get your own trip set up to make memories of your own by visiting their website at www.shiawela.com
Become a supporter of this podcast: https://www.spreaker.com/podcast/trilogy-outdoors--5441492/support.

The Charlie James Show Podcast
H2 - Tuesday July 8 2025 - "Caller theories on No Epstein List" "College Graduates not prepared for workforce" "Charlie not impressed with State Sen Stephen Goldfinch" "When will Hollywood learn to stop lecturing us"

The Charlie James Show Podcast

Play Episode Listen Later Jul 8, 2025 33:45


H2 - Tuesday July 8 2025 - "Caller theories on No Epstein List" "College Graduates not prepared for workforce" "Charlie not impressed with State Sen Stephen Goldfinch" "When will Hollywood learn to stop lecturing us"

Trilogy Outdoors
Season 4 Episode 113 Russell Fry in Studio

Trilogy Outdoors

Play Episode Listen Later Jun 21, 2025 58:34


As our 7th Congressional District Representative, Russell Fry has done an incredible job of keeping the promises he made while platforming for this most recent election. Our fisheries mismanagement is an issue that he is on, and he continues to bring questions to NOAA and its shrinking government body. We are excited to have Russell in the studio this week, and we cover a number of topics, as always, when he and the Senator are on the show. Of course we cover plenty of fins, fur, & feathers. But we do discuss some of the current issues facing the country and the world for that matter. We hope you will like and subscribe and be sure to share the link, and let us know what you think about today's show and also about any topics you would like to hear more on. We are going to have some great stories coming out of Africa as the Goldfinches travel abroad in search of trophy game. Go to www.trilogyoutdoorsmedia.com for more info and to get signed up for the Grand Strand Fishing Rodeo. Tight Lines and Enjoy!!!
www.fry.house.gov
Become a supporter of this podcast: https://www.spreaker.com/podcast/trilogy-outdoors--5441492/support.

The Filmmakers Podcast
'We Live in Time', 'Boy A' & 'Brooklyn' writer & director John Crowley on casting, storytelling and creative vision.

The Filmmakers Podcast

Play Episode Listen Later Jun 17, 2025 42:24


Today, we're absolutely thrilled to have on John Crowley, a director, screenwriter and filmmaker whose work consistently delves into the intricate tapestry of human experience with remarkable sensitivity and depth. Dom Lenoir sits down for a natter with John to unpack the creative journey behind this poignant film and explore the themes that drive his artistic vision.

John Crowley has a masterful touch for storytelling. He burst onto the scene with his critically acclaimed feature debut, "Intermission" (2003), followed by the powerful and poignant "Boy A" (2007). He captivated audiences and critics alike with the deeply moving "Brooklyn" (2015), which earned an Academy Award nomination for Best Adapted Screenplay. He then took on the complex literary adaptation of "The Goldfinch" (2019), and his television work includes episodes of the acclaimed series "True Detective" and "Black Mirror."

His latest film, "We Live in Time," promises to be another compelling addition to his already impressive filmography. This romantic drama, written by Nick Payne, explores the relationship of a couple, Tobias Durand, played by the incredible Andrew Garfield, and Almut Brühl, portrayed by the brilliant Florence Pugh, over the course of a decade. The film uniquely employs a nonlinear narrative, weaving through snapshots of their lives together – falling in love, building a home, and becoming a family – while confronting a difficult truth that challenges their very foundation.

We Live in Time is OUT NOW

OTHER LINKS
DIRTY BOY Premiere at Raindance tickets: https://raindance.eventive.org/schedule/dirty-boy-68234eda5e47ea122831f7f4
FOOD FOR THOUGHT documentary out NOW | Watch it HERE. A documentary exploring the rapid growth and uptake of the vegan lifestyle around the world. And if you enjoyed the film, please take a moment to share & rate it on your favourite platforms. Every review & every comment helps us share the film's important message with more people. Your support makes a difference!

PODCAST MERCH
Get your very own Tees, Hoodies, onset water bottles, mugs and more MERCH. https://my-store-11604768.creator-spring.com/

COURSES
Want to learn how to finish your film? Take our POST PRODUCTION COURSE https://cuttingroom.info/post-production-demystified/

PATREON
Big thank you to: Serena Gardner, Mark Hammett, Lee Hutchings, Marli J Monroe, Karen Newman. Want your name in the show notes or some great bonus material on film-making? Join our Patreon for bonus episodes, industry survival guides, and feedback on your film projects!

SUPPORT THE PODCAST
Check out our full episode archive on how to make films at TheFilmmakersPodcast.com

CREDITS
The Filmmakers Podcast is written, edited and produced by Giles Alderson @gilesalderson
Logo and Banner Art by Lois Creative
Theme Music by John J. Harvey

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Mission: Employable
Episode 210 – Iowa Lab Saves Wild Money With Earn & Learn

Mission: Employable

Play Episode Listen Later Jun 17, 2025 22:41


This episode of the Mission: Employable podcast dives into an Earn and Learn program that's saving one Iowa organization money while reinvesting in its workforce. Goldfinch Laboratory plays an important role in Iowa's medical industry, testing samples and biopsies sent over by other healthcare providers. However, the lab felt they were spending too much money on temporary hires. The solution? Find and train their own staff with an Earn and Learn program. Stephanie Allen, Director of Laboratory Operations, and Nancy Leiva, Shift Lead, join us to share how they made the decision to start this program from the ground up. Find out how the program is not only solving their staffing problem but saving them a ton of money in the process.

The Phlegm Cat Podcast
American Goldfinch, Mama Let Me Be

The Phlegm Cat Podcast

Play Episode Listen Later Jun 9, 2025 90:22


The Artist shares great family news! Your Huckleberry then befouls the great album Rumours. Ground Chucky, the Boy Made of Meat finds his voice but can it stop Mex from buying a snake?

BirdNote
Vivaldi's Goldfinch

BirdNote

Play Episode Listen Later Jun 3, 2025 1:43


Bird song caught the ear of Italian composer Antonio Vivaldi. And he even named a 1729 flute concerto for a bird — the goldfinch. The source of inspiration for Vivaldi's Goldfinch concerto, or Il Gardellino, was the European Goldfinch, a tiny bird found throughout much of Europe, where it frequents gardens and roadsides. No wonder Vivaldi found the goldfinch irresistible. More info and transcript at BirdNote.org.

Want more BirdNote? Subscribe to our weekly newsletter. Sign up for BirdNote+ to get ad-free listening and other perks. BirdNote is a nonprofit. Your tax-deductible gift makes these shows possible.

Lisburn Free Presbyterian Church
The Goldfinch That Fell From the Nest

Lisburn Free Presbyterian Church

Play Episode Listen Later May 21, 2025 4:53


Stuart Bowditch Podcasts
Reed Bunting and Cuckoo, Fen Bridge Lane, East Bergholt, Essex - 17th May 2025

Stuart Bowditch Podcasts

Play Episode Listen Later May 18, 2025 30:00


This recording has come about by my activities on the Constable Ambisonic project, where I'll be making ambisonic sound recordings of 20 locations of paintings by John Constable. https://www.constableambisonic.co.uk/ As I explore and reacquaint myself with 'Constable Country' I have been recording in a variety of locations in and around the Dedham Vale. This recording was made on a footpath leading up the hill from Fen Bridge Lane in East Bergholt on a lovely warm sunny afternoon. The first bird that I heard as I started walking to the site was a cuckoo, and soon many more birds joined the throng, including Reed Bunting, Wren, Blackbird, Chiffchaff, Song Thrush, Robin, Skylark, Whitethroat, Goldfinch, Blue Tit, Cuckoo, Chaffinch, Stonechat, Linnet, Dunnock, Blackcap, Crow, Pheasant, Great Tit, Greylag Goose and Magpie. As much as my birding skills are improving, I still rely heavily on the excellent Merlin Bird App https://merlin.allaboutbirds.org

A Gardener's Notebook
American Goldfinch in our friend's feeder in Denver, Colorado [Photography] [Video]

A Gardener's Notebook

Play Episode Listen Later May 8, 2025


Find more of my photos on PixelFed @douglaswelch

American Goldfinch in our friend's feeder in Denver, Colorado #goldfinch #bird #birds #birding #animal #wildlife #denver #colorado #feeder

Read more on this topic:
Tulips, Denver Botanic Garden, Denver, Colorado [Photography]
Scene, Denver Botanic Garden, Denver, Colorado [Photography]
Tulips, Denver Botanic Garden, Denver, Colorado
Tulips, Denver Botanic Garden, Denver, Colorado [Photography]
Tulip Reflections, Denver Botanic Garden, Denver, Colorado [Photography]

My Word with Douglas E. Welch
American Goldfinch in our friend's feeder in Denver, Colorado [Photography] [Video]

My Word with Douglas E. Welch

Play Episode Listen Later May 8, 2025


On the Nature Trail - A Podcast

In this episode of On the Nature Trail, we spotlight one of New Hampshire's most cheerful summer birds – the American Goldfinch. From their acrobatic feeder antics to their vibrant yellow plumage, these seed-loving songbirds are a true seasonal delight. Whether you’re watching from your feeder or walking in a wildflower meadow, this episode invites […]

Quick Book Reviews
Unleashing A Language of Dragons: S. F. Williamson on Magic, Power & Storytelling

Quick Book Reviews

Play Episode Listen Later Apr 18, 2025 28:18


Interview with S F Williamson about A Language of Dragons.

S F Williamson recommends:
When Women Were Dragons by Kelly Barnhill
The Awakening of Miss Prim by Sanmartin Fenollera, translated by Sonia Soto
The Goldfinch by Donna Tartt

The Quick Book Reviews Podcast can be found:
Instagram: https://www.instagram.com/quick_book_reviews
Bluesky: https://bsky.app/profile/quickbookreviews.bsky.social
Threads: @quick_book_reviews
TikTok: https://www.tiktok.com/@quickbookreviews
Twitter: https://x.com/quickbookrevie3

Hosted on Acast. See acast.com/privacy for more information.

DTFae
book discussion: Gild

DTFae

Play Episode Listen Later Mar 27, 2025 33:10


Let's yap about Gild by Raven Kennedy. Summon us @DTFaePodcast. We like our coffee icy and our books spicy! Oh, and we're totally Down To Fae. A podcast for fantasy romance readers and fans of authors like Sarah J. Maas, Jennifer L. Armentrout, Rebecca Yarros and Carissa Broadbent. Follow along as your delulu hosts discuss your favorite romantasy books in a chapter-by-chapter read, re-read or refresher.

Antlered Path Podcast
Season 2 - Episode 8 'Awakening and renewal'.

Antlered Path Podcast

Play Episode Listen Later Feb 24, 2025 36:18


To support the podcast please click here.

Show notes: "Awakening and Renewal"

A belated Imbolc episode, though I feel the content is still relevant! So, wow, it has been some months since the last episode and we are so glad to be sharing this with you now. Join me out on the trods and connect to Imbolc tide blessings and energies with this episode that focuses on the shifting of Winter into Spring. There are blessings and a poem, there are trodcasts too! Oh, and I pronounce the name of the Goddess Brigid in the contemporary Irish 'Breej'. It is what felt right!

The trodcasts that I share in this episode were recorded in different local West Dorset sacred spaces in late January. The first one is in Cattistock churchyard, where there is a beautiful little well and Snowdrops, plus lots of incredible birds. The following one is in the garden by Silver Well at Cerne Abbas, and then the Beech Grove at Giant Hill, Cerne Abbas. This is a very special place of pilgrimage, myth and recent archaeological exploration. There is a large Beltaine/May Day gathering that takes place at the well and on the hill with a local Morris side at dawn. I simply share the messages and insights that flow through me whilst connecting to spirits of place. I hope they resonate with you.

It was my intention to release this episode in early February, so apologies for the later publishing! Imbolc is still taking place, so hopefully you will feel the awakening and renewing Imbolc blessings reaching you where you are.

The blessings and poem: the first blessing is 'Brigid of the Mantle' by Caitlin Matthews, from the Little Book of Celtic Blessings, and an excerpt by John O'Donohue from the blessing 'For Presence', found in Benedictus (Europe) / To Bless the Space Between Us (US). 'Goldfinch' is from The Lost Spells by Robert Macfarlane and Jackie Morris.

Many blessings across the Ways to you. With Love, Hilary and Tony x

Second in Command: The Chief Behind the Chief
Ep. 451 - Lincoln-Goldfinch Law Owner, Kate Lincoln-Goldfinch

Second in Command: The Chief Behind the Chief

Play Episode Listen Later Feb 20, 2025 30:07


In today's episode of The Second in Command podcast, Cameron is joined by Kate Lincoln-Goldfinch, the owner and CEO of Lincoln-Goldfinch Law – Abogados de Inmigración.

During the conversation, Cameron and Kate explore the nuances of navigating complex professional relationships. You'll hear about the challenges of balancing different leadership styles, the importance of communication, and the impact of workplace culture on collaboration. With contrasting perspectives on accountability and conflict resolution, they reveal the delicate dance required to maintain harmony while driving an organization forward. The discussion highlights a crucial mindset shift — moving from problem-solving to coaching. Instead of acting as intermediaries, effective leaders empower their teams to handle difficult conversations directly, fostering personal growth and stronger relationships.

Beyond the day-to-day struggles, the conversation touches on the deeper bonds that sustain long-term partnerships. Whether through shared experiences, intentional connection, or simple gestures of appreciation, investing in professional relationships is just as important as driving results.

If you've enjoyed this episode of the Second in Command podcast, be sure to leave a review and subscribe today! Enjoy!

In This Episode You'll Learn:
Kate's background in immigration law, including a life-changing experience with a detained asylum-seeking family. (2:41)
The importance of regular communication and avoiding duplication of efforts between the CEO and COO. (13:00)
The differences between a COO and a director, focusing on leadership and vision versus hands-on execution. (18:06)
The challenges of transitioning from a culture of leniency to one of accountability. (22:00)
And much more...

Resources:
Connect with Kate: Website | LinkedIn
Connect with Cameron: Website | LinkedIn
Get Cameron's latest book "Second in Command: Unleash the Power of Your COO"
Get Cameron's online course – Invest In Your Leaders

BookTok Made Me Podcast
Goldfinch Part 2 - The Plated Prisoner Series 6

BookTok Made Me Podcast

Play Episode Listen Later Feb 18, 2025 50:49


Bridget, Caitlin, and Hilda discuss the last half of "Goldfinch," the sixth and final book in Raven Kennedy's Plated Prisoner series. Not to spoil anything, but it's a fantastic end to a great series ... and who knows, maybe the door has been left open to a spinoff. We sure hope so!

Join our Patreon for exclusive behind-the-scenes content and let's be friends!
Instagram > @Booktokmademe_pod
TikTok > @BooktokMadeMe

BookTok Made Me Podcast
Goldfinch Part 1 - The Plated Prisoner Series 6

BookTok Made Me Podcast

Play Episode Listen Later Feb 11, 2025 47:45


Join Bridget, Caitlin, and Hilda to discuss the first half of "Goldfinch," the sixth and final book in Raven Kennedy's The Plated Prisoner series. And we're all in agreement when we say the first half doesn't disappoint, and as the cherry on top, we get a glorious display of King Ravinger's special talent - iykyk.

Join our Patreon for exclusive behind-the-scenes content and let's be friends!
Instagram > @Booktokmademe_pod
TikTok > @BooktokMadeMe

Alabama's Morning News with JT
Immigration Attorney Kate Lincoln-Goldfinch on Trump's plans for the southern border

Alabama's Morning News with JT

Play Episode Listen Later Jan 21, 2025 5:46 Transcription Available


The Big 550 KTRS
CarneyShow 01.09.25 Dr. Robert Marbut, Kate Lincoln-Goldfinch, Shannon Kingston, Brendan Wiese

The Big 550 KTRS

Play Episode Listen Later Jan 9, 2025 128:52


CarneyShow 01.09.25 Dr. Robert Marbut, Kate Lincoln-Goldfinch, Shannon Kingston, Brendan Wiese

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
2024 in Post-Transformers Architectures (State Space Models, RWKV) [LS Live @ NeurIPS]

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 24, 2024 43:02


Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver.Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: “efficient models”, “retentive networks”, “subquadratic attention” or “linear attention” but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture:So, for lack of a better term, we decided to call this segment “the State of Post-Transformers” and fortunately everyone rolled with it.We are fortunate to have two powerful friends of the pod to give us an update here:* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year* Recursal AI: with CEO Eugene Cheah who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers. We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternativesFull Talk on YoutubePlease like and subscribe!LinksAll the models and papers they picked:* Earlier Cited Work* Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention* Hungry hungry hippos: Towards language modeling with state space models* Hyena hierarchy: Towards larger convolutional language models* Mamba: Linear-Time Sequence Modeling with Selective State Spaces* S4: Efficiently Modeling Long Sequences with Structured State Spaces* Just Read Twice (Arora et al)* Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. 
However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. * To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.* Jamba: A 52B Hybrid Transformer-Mamba Language Model* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. * This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. 
Core designs include: * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. * (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. * As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. * RWKV: Reinventing RNNs for the Transformer Era* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability. * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference. * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.* LoLCATs: On Low-Rank Linearizing of Large Language Models* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs. * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute. * We base these steps on two findings. * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA). * LoLCATs significantly improves linearizing quality, training efficiency, and scalability. 
We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU. * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens. * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work). * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.Timestamps* [00:02:27] Intros* [00:03:16] Why Scale Context Lengths? or work on Efficient Models* [00:06:07] The Story of SSMs* [00:09:33] Idea 1: Approximation -> Principled Modeling* [00:12:14] Idea 3: Selection* [00:15:07] Just Read Twice* [00:16:51] Idea 4: Test Time Compute* [00:17:32] Idea 2: Hardware & Kernel Support* [00:19:49] RWKV vs SSMs* [00:24:24] RWKV Arch* [00:26:15] QWRKWv6 launch* [00:30:00] What's next* [00:33:21] Hot Takes - does anyone really need long context?Transcript[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks Our next keynote covers the State of Transformers alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Chia of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Veepal Vedprakash introducing them.[00:00:49] AI Charlie: And CTO CE Zhang joining us to talk about how they are building together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from Red Pajama V2, Flash Attention 3, Mamba 2, Mixture of Agents.[00:01:15] AI Charlie: Based, Sequoia, Evo, Dragonfly, Danfoo's Thunder Kittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1. 5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on device, end Energy Usage Sensitive Windows Copilot Use Cases and has launched the first updates on RWKV v6, codenamed Finch and Goldfinch.[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRdata UKv6, a QEN32B model [00:02:00] modified with RDWKV linear attention layers. Eugene has also written the most single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.[00:02:27] Intros[00:02:27] Dan Fu: Yeah, so thanks so much for having us. 
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean Having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI 01 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit Interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little Dolly 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, But for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. 
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
Actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators that not only were you sure, you're not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called states based model. So here the seminal work is, is one about our work queue in 2022.[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.[00:09:33] Idea 1: Approximation -> Principled Modeling[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example a transformer like Next Token Prediction Architecture. So some of those early states-based model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principle theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality and. When this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like long range arena, some long sequence evaluation benchmarks, There was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that What's so influential about these states based models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't paralyze as well as detention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. 
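To make that recurrence-to-convolution trick concrete, here is a minimal NumPy sketch of the idea (it is not the S4 implementation, which parameterizes and computes the kernel far more cleverly): the same linear state space map is evaluated once as a token-by-token scan and once as an FFT-based convolution, and the two outputs agree.

```python
# Sketch only: a linear recurrence x_t = A x_{t-1} + B u_t, y_t = C x_t can be
# unrolled into a 1-D convolution y = u * K with kernel K_t = C A^t B, and that
# convolution can be computed in O(n log n) with the FFT instead of stepping
# through the sequence one token at a time.
import numpy as np

def ssm_recurrent(A, B, C, u):
    """Sequential (RNN-style) scan: one state update per token."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B * u_t          # state update
        ys.append(C @ x)             # readout
    return np.array(ys)

def ssm_convolutional(A, B, C, u):
    """Same map, expressed as a causal convolution and computed with the FFT."""
    n = len(u)
    # Materialize the kernel K_t = C A^t B for t = 0 .. n-1 (naively, for clarity)
    K, x = [], B.copy()
    for _ in range(n):
        K.append(C @ x)
        x = A @ x
    K = np.array(K)
    # Causal convolution via FFT; zero-pad to avoid circular wrap-around
    L = 2 * n
    y = np.fft.irfft(np.fft.rfft(u, L) * np.fft.rfft(K, L), L)[:n]
    return y

# The two paths agree on random inputs (up to floating point error).
rng = np.random.default_rng(0)
d, n = 4, 128
A = 0.9 * np.eye(d) + 0.01 * rng.standard_normal((d, d))
B, C, u = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal(n)
assert np.allclose(ssm_recurrent(A, B, C, u), ssm_convolutional(A, B, C, u), atol=1e-6)
```

The point of the trick is that the sequential dependency disappears at training time: once the kernel is known, the whole sequence is processed with a single FFT convolution, which is the n log n scaling mentioned above.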
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like flash fft conf, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, We are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because languages, It's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these hyena models were One way you can do this is by just adding some simple element wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
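As a concrete reference for what that reassociation looks like, here is a toy NumPy sketch that computes phi(Q) (phi(K)^T V) instead of (phi(Q) phi(K)^T) V, shown next to standard softmax attention. It is illustrative only (non-causal, with an arbitrary feature map); real systems such as BASED or RWKV differ in their feature maps, normalization and causal handling, but it shows why no n x n matrix is ever materialized.

```python
# Sketch only: quadratic softmax attention vs. a kernelized "linear attention"
# that reassociates the matrix products to stay linear in sequence length.
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: the n x n score matrix makes this O(n^2 * d)."""
    S = Q @ K.T / np.sqrt(Q.shape[-1])            # (n, n) -- the quadratic part
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    """Drop the softmax, apply a positive feature map, and reassociate:
    (phi(Q) phi(K)^T) V == phi(Q) (phi(K)^T V), so only d x d and n x d
    matrices are formed. Cost is O(n * d^2), linear in sequence length n."""
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                                  # (d, d) summary of keys/values
    Z = Qp @ Kp.sum(axis=0)                        # per-query normalizer
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
n, d = 512, 64
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out_quadratic = softmax_attention(Q, K, V)         # exact softmax weighting, O(n^2)
out_linear = linear_attention(Q, K, V)             # different weighting, O(n)
print(out_quadratic.shape, out_linear.shape)       # both (n, d)
```

With n = 512 and d = 64 the linear path only ever touches d x d and n x d matrices, so doubling the sequence length roughly doubles the work rather than quadrupling it; a causal language model additionally keeps phi(K)^T V as a running prefix sum, which is what makes the recurrent formulations possible.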
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. 
So the four key ideas are: instead of just doing a simple linear attention approximation, take ideas that we know from other fields like signal processing and do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is that you really want hardware and kernel support from day one. Even if your model is theoretically more efficient, if somebody goes and runs it and it's two times slower, one of the things that we've learned is that if you're in that situation, it's just going to be dead on arrival.[00:17:49] Dan Fu: So you want to be designing your architectures with that in mind. One of the key machine learning ideas that has been important for quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state, and really focus on that as a key decider of quality. And finally, one of the emerging new things for this line of work, and something that's quite interesting, is: what are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to what you might do for a standard transformer? I'll briefly end this section. I've labeled this slide "where we are yesterday" because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of these efficient alternative models were: AI21 trained this hybrid MoE called Jamba,[00:18:40] Dan Fu: which is currently the state of the art for these non-transformer architectures. NVIDIA and MIT put out this new diffusion model called SANA recently, and one of their key observations is that you can take a standard diffusion transformer model, replace the layers with linear [00:19:00] attention, and that lets you scale to much larger images, much larger sequences, more efficiently.[00:19:07] Dan Fu: And one thing that I don't think anybody would have called a few years ago is that one of those gated SSMs, gated state space models, ended up on the cover of Science, because a great group of folks went and trained some DNA models. So that's Michael Poli, Eric Nguyen, and others from Stanford and the Arc Institute.[00:19:26] Dan Fu: So we're really at an exciting time in 2024, where these non-transformer, post-transformer architectures are showing promise across a wide range of modalities, of applications, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. Yeah. So, I think one common question that we tend to get asked is: what's the difference between [00:20:00] RWKV and state space? I think one of the key things to really understand, the difference between the two groups, is that we are actually more like an open source, random-internet-meets-academia kind of situation.[00:20:11] Eugene Cheah: Most of us never wrote any paper, but we basically looked at RNNs and linear attention when Attention is All You Need came out, and then we decided, hey, there is a quadratic scaling problem. Why don't we try fixing that instead?
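As a rough illustration of that quadratic scaling problem, using made-up sizes and ignoring constants and the MLP layers: the attention cost grows with the square of the sequence length, while a linear-attention or RNN-style mixer grows linearly.

```python
# Back-of-envelope FLOP comparison for one layer's sequence mixing.
# Numbers (hidden size, sequence lengths) are illustrative, not from the talk.
d = 4096
for n in [1_000, 10_000, 100_000, 1_000_000]:
    attn_flops = 2.0 * n * n * d      # softmax attention: every token attends to every token
    linear_flops = 2.0 * n * d * d    # linear-attention / recurrent mixer: linear in n
    print(f"n={n:>9,}  attention ~ {attn_flops:.1e} FLOPs   linear ~ {linear_flops:.1e} FLOPs")
```

Under these assumptions the two curves cross around n equal to the hidden size, which is why the gap only becomes dramatic at long context lengths.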
So we ended up developing our own branch, but we end up sharing ideas back and forth.[00:20:30] Eugene Cheah: And we do all this actively in Discord, GitHub, etc. This was so bad for a few years that basically the average group's H-index was so close to zero that EleutherAI actually came in and helped us write our first paper. Great, now our H-index is three, apparently. But the thing is, a lot of these experiments led to results, and essentially we took the same ideas from linear attention, [00:21:00] and we built on them.[00:21:01] Eugene Cheah: So, to take a step back into how RWKV handles its own attention mechanic and achieves the same goals of, like, O(N) compute, in service of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute, that's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train on even 200 languages, to cover all the languages in the world. But at the same time, we work on this architecture to lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of LSTM token flow? Because I think to understand the architecture, it's probably easier to understand it from the RNN lens.[00:21:46] Eugene Cheah: Because that's what we built on. The state space folks kind of tried to start anew and took lessons from that, so there's a little bit of divergence there. And this is, let's say, our version of linear attention. So to take a step back: [00:22:00] all foundation models, be it transformers or non-transformers, at a very high level,[00:22:05] Eugene Cheah: pump in the tokens. I mean, they turn text into embeddings, go through a lot of layers, generate a lot of states, whether that's the QKV cache, or RNN states, or RWKV states, and output an embedding; they are not the same thing. And we just take more layers and more embeddings, and somehow that magically works.[00:22:23] Eugene Cheah: So, if you remember your ancient RNN lessons, the general idea is that you have the embedding information flowing all the way up, and you take that information and flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: So, this is how it generally works. Karpathy is quoted saying that RNNs are actually unreasonably effective. The problem is this is not scalable. To start doing work on the second token, you need to wait for the first token, and likewise for the third token and fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So you [00:23:00] can have an H100 and you can't even use 1 percent of it. That's kind of why RNNs didn't really take off in the direction that we wanted, like billions of parameters, when it comes to training. So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for RNNs. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then we were like, hey, no one cared because the loss was crap, but how do we improve that?
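A minimal sketch of that bottleneck, with illustrative shapes: in the classic loop the heavy matmul for token t+1 cannot start until token t's output exists, whereas if the heavy matmuls depend only on the inputs, they can be batched over the whole sequence, and only a cheap elementwise recurrence stays serial.

```python
# Toy contrast between the serial RNN pattern and a parallel-friendly pattern.
# Sizes and the fixed decay value are illustrative assumptions.
import torch

d, seq = 512, 1024
x = torch.randn(seq, d)
W = torch.randn(d, d)

# Serial pattern (LSTM-style): each step's matmul waits on the previous step's output.
h = torch.zeros(d)
for t in range(seq):
    h = torch.tanh(W @ (x[t] + h))   # the dependency on h blocks GPU parallelism

# Parallel-friendly pattern: one big matmul over all tokens, then a cheap scan.
u = torch.tanh(x @ W.T)              # (seq, d), computed for every token at once
decay = 0.9
h = torch.zeros(d)
for t in range(seq):
    h = decay * h + u[t]             # elementwise only; the heavy work is already done
```

The second pattern is why the compute can "cascade" across layers and saturate the GPU, as described next.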
And that's essentially where we moved forward, because if you see this kind of flow, you can actually get your GPU saturated quickly, where it essentially cascades.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. It's like, once you get your first layer, your token, to finish being computed, you start to cascade your compute all the way until you're at, hey, I'm using 100 percent of the GPU. So we worked on it, and we started going along the principle that as long as we keep this general architecture, [00:24:00] where we can cascade and be highly efficient, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, if you ask me to explain some things in the paper, officially in the paper I'll say we had this idea and we wrote it this way. The reality is someone came with the code, we tested it, it worked, and then we rationalized later. So, the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: idea behind RWKV is that we generally have two major blocks.[00:24:30] Eugene Cheah: We call them time mix and channel mix. Time mix generally handles long-term memory states, where essentially we apply the matrix multiplications and SiLU activation functions to process an input embedding into an output embedding. I'm oversimplifying it, because this calculation has changed every version, and we have, like, version 7 right now.[00:24:50] Eugene Cheah: Channel mix is similar to BASED in the sense that it does shorter-term attention, where it just looks at the sibling token, the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers themselves, because we do have three papers on this.[00:25:09] Eugene Cheah: Basically: RWKV: Reinventing RNNs for the Transformer Era; Eagle and Finch, the RWKV matrix-valued state paper, which is the updated version 5 and version 6; and GoldFinch, which is our hybrid model. We are already writing the paper for version 7, RWKV-7, named Goose; our architectures are named after birds.[00:25:30] Eugene Cheah: And I'm going to cover QRWKV as well, and where that led us. Because we are all GPU poor and, to be clear, most of this research is done on only a handful of H100s, which one Google researcher told me was, like, his experiment budget as a single researcher.[00:25:48] Eugene Cheah: So our entire organization has less compute than a single researcher at Google. So one of the things that we explored was: how do we convert transformer models instead? Because [00:26:00] someone already paid that billion dollars, or million dollars, for training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And I believe Together AI worked on LoLCATs for the Llama side of things, and we took some ideas from there as well, and we essentially did that for RWKV.[00:26:15] QRWKV6 launch[00:26:15] Eugene Cheah: And that led to QRWKV6, which we just dropped today, a 32B Instruct preview model, where we took the Qwen 32B Instruct model, froze the feedforward layers, removed the QKV attention layers, and replaced them with RWKV linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the RWKV channel mix layer; we only have the time mix layer.
But once we do that, we train the RWKV layers. The important bit is that the feedforward layers need to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layers and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly, and to be honest to the frustration of the RWKV [00:27:00] MoE team, which ended up releasing their model on the same day, was that with just a few hours of training on two nodes, we managed to get it to be roughly on par with the original Qwen 32B model. In fact, the first run completely confused us. I was telling Daniel Goldstein, Smirky, who kind of leads most of our research coordination: when you pitched me this idea, you told me at best you'll get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the challenge score and the Winogrande score would shoot up. I don't know what's happening there. But it did. The MMLU score dropping, that was expected, because if you think about it, when we were training all the layers, we essentially Frankensteined this thing, and we did brain damage to the feedforward network layers too, with the new RWKV layers.[00:27:47] Eugene Cheah: But, 76 percent, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than three days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because we are already now in the process of converting the 72B. This is actually an extremely compute-efficient way to test our attention mechanic.[00:28:10] Eugene Cheah: It becomes a shortcut. We are already planning to do our version 7 and our hybrid architecture with it, because we don't need to train from scratch, and we get a really good model out of it. And the other thing that is uncomfortable to say, because we are doing this right now on the 70B class, is that if this scales correctly to 128k context length, and I'm not even talking about a million, just 128k, the majority of enterprise workload today is just on 70B at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmarks match, we can replace the vast majority of current AI workloads, unless you want super long context. And then, sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially [00:29:00] we are excited to just push this further.[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think is going to be exclusive to RWKV. It probably will work for Mamba as well, I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids. One of the weirdest things that I want to say outright, and I confirmed this with the Black Mamba team and the Jamba team, because we did the GoldFinch hybrid model, is that none of us understands why a hard hybrid of a recurrent state space model[00:29:28] Eugene Cheah: and a transformer performs better than the baseline of both. It's like, when you train one and then you replace parts of it, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both.
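A sketch of that staged conversion recipe follows, with hypothetical module and function names (model.blocks, make_rwkv_layer, and train_fn are stand-ins for illustration, not the actual QRWKV training code or schedules).

```python
# Staged "convert a transformer" recipe as described above, in sketch form.
# The model object, its .blocks/.attn attributes, and train_fn are assumed placeholders.
def convert_and_train(model, make_rwkv_layer, train_fn):
    # Stage 0: swap each softmax-attention block for a recurrent (RWKV-style) mixer.
    for block in model.blocks:
        block.attn = make_rwkv_layer(block.attn.hidden_size)

    # Stage 1: freeze everything except the new mixers, so they learn to play
    # the role the old attention played for the still-frozen feed-forward layers.
    for p in model.parameters():
        p.requires_grad = False
    for block in model.blocks:
        for p in block.attn.parameters():
            p.requires_grad = True
    train_fn(model, lr=1e-4)

    # Stage 2: unfreeze the feed-forward layers and train everything together,
    # with a (custom) learning-rate schedule so the parts adapt to each other.
    for p in model.parameters():
        p.requires_grad = True
    train_fn(model, lr=1e-5)
```

The design point is that the expensive pretraining of the donor model is reused; only the sequence-mixing layers and a short joint fine-tune are paid for.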
And that's one area where, like, we only have four experiments across four teams, and a lot more needs to be done.[00:29:51] Eugene Cheah: But these are the things that excite me, essentially, because that is where we can potentially move ahead. Which brings us to what comes next.[00:30:00] What's next[00:30:00] Dan Fu: So, this part is where we'll talk a little bit about stuff that we're excited about, and maybe have some wild speculation on what's coming next.[00:30:12] Dan Fu: And of course this is also the part that will be more open to questions. So, a couple of things that I'm excited about: one is continued hardware-model co-design for these models. One of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.[00:30:29] Dan Fu: And one of the things that we found frustrating is that every time we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land writing these new efficient kernels. And if we decided to change one thing in PyTorch, like one line of PyTorch code, that's like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with a library like ThunderKittens was to break down what are the key principles, what are the key compute pieces that you get from the hardware. For example, on [00:31:00] H100 everything really revolves around a warp group matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix-matrix multiply operations, like multiplying two 64 by 64 matrices, for example. And if you know that ahead of time when you're designing your model, that probably gives you some information about how you set the state sizes and how you set the update function.[00:31:27] Dan Fu: So with ThunderKittens we basically built a whole library around this basic idea that your basic compute primitive should not be a float, it should be a matrix, and everything should just be matrix compute. And we've been using that to try to both re-implement some existing architectures and also start to design some new ones that are really designed with this tensor core primitive in mind.[00:31:44] Dan Fu: Another thing that at least I'm excited about is that over the last four or five years we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter, there's been a bunch of new next-generation models coming out.[00:32:04] Dan Fu: So there are video generation models that can run in real time, that are controlled by your mouse and your keyboard, and I'm told if you play with them that they only have a few seconds of memory. Can we take that kind of model, can we give it a very long context length, so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe we could use some of these new efficient models for some of the new video generation models that came out. So Sora came out, I don't know, two days ago now.
But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today: you know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted it to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't, for personal reasons, but what if I did, you know? What does that mean for the architecture? I think that's certainly something I'm pretty excited about, and I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG relevant in the future of state space models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have, I'll say I found it was a little bit challenging to do research on, because we had this experience over and over again where you could have an embedding model of any quality, so you could have a really, really bad embedding model or a really, really [00:34:00] good one, by any measure of good,[00:34:03] Dan Fu: and for the final RAG application it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are extremely excited about the idea of RWKV or state space models potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as was covered previously, you need to test the model differently. So, think of it more along the lines of a human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement I'll make. And we humans are not quadratic transformers. If we were, if, let's say, we increased our brain size for every second we live, we would have exploded by the time we were five years old, or something like that. And I think, fundamentally for us, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc., our general idea is that instead of that expanding state, that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big a limit is the question. Like, RWKV is running at 40 megabytes for its state. A future version might run at 400 megabytes. That is, like, millions of tokens if you're talking about the mathematical maximum possibility.[00:35:29] Eugene Cheah: It's just that, I guess, we are all more inefficient about it, so maybe we hit 100,000. And that's kind of the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because the model will choose to forget things and choose to remember things.
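As a rough back-of-envelope on those state-size numbers, using an assumed, not published, cost per remembered token:

```python
# Loose upper bound implied by a fixed-size recurrent state.
# The bytes-per-token figure is an illustrative assumption only.
state_bytes = 40 * 1024 * 1024            # ~40 MB recurrent state, the figure cited above
bytes_per_token_remembered = 16           # assumed cost to retain the gist of one token
print(state_bytes // bytes_per_token_remembered)   # ~2.6 million tokens as a loose ceiling
```

In practice, as noted above, the usable figure is far lower; the point is only that a fixed state of that size is not obviously at odds with very long effective context.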
And that's why I think that there might be some element of RAG, but it may not be the same RAG.[00:35:49] Eugene Cheah: It may be that the model learns things and goes, hmm, I can't remember that article, let me do a database search. Just like us humans, when we can't remember an article in the company, we do a search on Notion. [00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are external. So right now, the one intuition about language models is that all those parameters are there just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is that it'll just be a lot less factual about things that it knows or that it can do.[00:36:32] Dan Fu: But that points to all those weights that we're spending, all that SGD that we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe has some sort of gradient descent in it, but that would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president, so that it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When a 405B state space model exists, RAG exists, no one does long context, who's throwing in 2 million token questions? Hot takes?[00:37:24] Dan Fu: The "who's throwing in 2 million token questions" question, I think, is a really good one. So actually, I was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't play both sides?[00:37:40] Dan Fu: But I think, for both of us, the reason that we first got into this was just from the first-principles question: there's this quadratic thing, clearly intelligence doesn't need to be quadratic, what is going on, can we understand it better? Since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting a two million token prompt into these models. And, you know, if they are, maybe we can go design a better model to do that particular thing. Yeah, what do you think about that? You've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right? Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV.
I use it a lot.[00:38:41] Eugene Cheah: So, some people have used it, and I think this might be where my opinion starts to differ, because I think the big labs may have a bigger role in this. Even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that when we need to backprop [00:39:00] against the states, we actually need to maintain the states in between the tokens, proportional to the token length.[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million of context if we are actually training at 1 million. Which is the same for transformers, actually, but it means we don't magically reduce the VRAM consumption at training time. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, I think O1-style reasoning might actually be pushing that direction downwards. In my opinion, and this is my partial hot take: let's say you have a super big model, and let's say you have a 70B model that may take double the tokens but gets the same result.[00:39:51] Eugene Cheah: Strictly speaking, the 70B, and this is for transformer or non-transformer alike, will take less resources than that 400B [00:40:00] model, even if it did double the amount of thinking. And if that's the case, and we are still all trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and efficient as possible.[00:40:11] Eugene Cheah: With a very efficient architecture, which some folks happen to be working on, you can just reason it out over larger and larger context lengths.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a great question. I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run them forever, and at the very least it won't error out or crash on you.[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, one place where the research on architectures probably ran faster than another line of research is the benchmarks for long context. So you turn it on forever, you want to do everything or watch everything: what is it that you actually wanted to do? Can we actually build some benchmarks for that, then measure what's happening, and then ask the question: can the models do it? Is there something else that they need? Yeah, I think that if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long context benchmarks out at the same time as we started pushing context length on all these models.[00:41:41] Eugene Cheah: I will also say, on the use case: I think we both agree that there's no infinite memory, and the model needs to be able to learn and decide.
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanic that is not based on token position is that the model doesn't suddenly become crazy when you go past the [00:42:00] 8k training context length, or the million context length.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory, some of these things are still somewhat there. That's the whole point of why reading twice works, things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers where they use this architecture for time series data.[00:42:26] Eugene Cheah: Weather modeling. So you are not asking what the weather was five days ago; you're asking what the weather will be tomorrow, based on the effectively unbounded length of data, for as long as this Earth and the computer keep running. And they found that it is better than existing transformer or existing architectures at modeling this weather data,[00:42:47] Eugene Cheah: controlled for the parameter size and so on. I'm quite sure there are people with larger models. So there are future applications in this case, if your question is about what's next and not what was ten years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe

ThinkEnergy
Holiday Rewind Part 2: Unwrapping the energy transition

ThinkEnergy

Play Episode Listen Later Dec 23, 2024 51:42


The final episode of thinkenergy in 2024 unwraps on the year's biggest topic: the energy transition. Learn how it's shaped discussions and actions across the energy sector, as we revisit the most insightful moments from past episodes, including expert insights on sustainable practices, investments needed for future transformations, and the impacts on rural, remote, and urban communities. Tune in for a holiday rewind of how the energy transition affects Canadian consumers, businesses, and the environment.   Related links   ●       Episode 144 (The what, where, when, and how of Canada's energy transition): https://thinkenergypodcast.com/episodes/the-what-where-when-and-how-of-canadas-energy-transition/ ●       Episode 140 (Current affairs with Francis Bradley, Electricity Canada's President and CEO): https://thinkenergypodcast.com/episodes/current-affairs-with-francis-bradley-electricity-canadas-president-and-ceo/ ●       Episode 141 (Decarbonizing and electrifying your home, with Sarah Grant of Goldfinch Energy): https://thinkenergypodcast.com/episodes/decarbonizing-and-electrifying-your-home-with-sarah-grant-of-goldfinch-energy/ ●       Episode 142 (Electrifying Canada's remote communities with QUEST Canada): https://thinkenergypodcast.com/episodes/electrifying-canadas-remote-communities-with-quest-canada/ ●       Episode 142 (Turning energy consumer interest into action with EY Global): https://thinkenergypodcast.com/episodes/turning-energy-consumer-interest-into-action-with-ey-global/ ●       Trevor Freeman on LinkedIn: https://www.linkedin.com/in/trevor-freeman-p-eng-cem-leed-ap-8b612114/ ●       Hydro Ottawa: https://hydroottawa.com/en    To subscribe using Apple Podcasts: https://podcasts.apple.com/us/podcast/thinkenergy/id1465129405   To subscribe using Spotify: https://open.spotify.com/show/7wFz7rdR8Gq3f2WOafjxpl   To subscribe on Libsyn: http://thinkenergy.libsyn.com/ --- Subscribe so you don't miss a video: https://www.youtube.com/user/hydroottawalimited   Follow along on Instagram: https://www.instagram.com/hydroottawa   Stay in the know on Facebook: https://www.facebook.com/HydroOttawa   Keep up with the posts on X: https://twitter.com/thinkenergypod   Transcript: Trevor Freeman  00:07 Welcome to think energy, a podcast that dives into the fast, changing world of energy through conversations with industry leaders, innovators and people on the front lines of the energy transition. Join me, Trevor Freeman, as I explore the traditional, unconventional and up and coming facets of the energy industry. If you have any thoughts, feedback or ideas for topics we should cover, please reach out to us at thinkenergy@hydrottawa.com. Hey everyone and welcome back. Well, we find ourselves here at the tail end of 2024 about to wrap up the year. Hopefully you are all looking at some restful holiday plans, a chance to sort of unwind and decompress after what seems to be the same every year, kind of a busy year. There's always lots going on, but hopefully you're looking forward to some downtime over the holidays. I know I certainly am, as is normal, at the end of the year, we are looking back on the year that was the year that we've just gone through. 
And I'll say right off the bat that I'm really grateful for this year and this chance to step into the host role of the think energy podcast earlier this year, I took over in March of this year, when the previous host, Dan Seguin, retired, so I'll express my gratitude right off the bat to Dan and team for sort of pioneering this podcast over the previous years and then trusting me to take over the host chair. It's been a really fun journey and fun to kind of engage with our guests on different topics that I'm really passionate about you guys know from listening to this that I really like talking about climate change and energy and the energy transition, and this is a really cool and neat platform to be able to do that. So, thanks to the team for trusting me with that role. One thing we've been doing, as we've been looking back, is trying to figure out, you know, what is the main theme of this podcast here? What do we actually talk about? In our last episode you know that we did a bit of a summary of some of the top episodes from the year, in terms of, you know, interest from you the listener. For this one, what we wanted to do is really embody the theme of the year, and I think it should be no surprise that the theme is the energy transition. I mean, that's kind of the theme of the podcast. I know we touch on other aspects of working in the energy sector, but the energy transition is really the all-encompassing theme or thing that we talk about, and we spend a lot of time on here in this podcast, and so we wanted to bring you some of the episodes that really talk through what that energy transition is, and what does it mean for us. What does it mean for us as energy consumers, as homeowners and people that work and own and run businesses, as people that work in the utility industry and are making decisions about the future of energy? So, we've picked a few clips from the year that we think really embody that. So, get comfy, hopefully you're warm inside, as it's maybe snowy out where you are, maybe not, maybe you're listening from somewhere warm. But get comfy and have a listen to what we think are some of the clips that really embody what this year was about when it comes to the energy transition.  To start things off, I think it would be good to and unfortunately, you're going to have to listen to my voice for another little bit longer. It'll be good to start with an episode I did not too long ago, which was really a primer on the energy transition, which really focused on helping everybody wrap their heads around what exactly is this thing that we talk about called the energy transition. So have a listen to this clip from that. And if you're interested, go back and listen to the whole episode. When we think about the energy transition, we probably mostly think of this ongoing shift to cleaner emissions free energy. So EVs over gas cars, heat pumps over gas furnaces, etc. That is definitely part of it. In fact, that's a major part of it. But like most things in life, it's never just as simple as that. The energy transition is a truly fundamental shift in our global relationship with energy, which includes not just what makes our cars go, but everything from how, where and when we generate energy, how, where and when we store and use energy, how we pay for the energy we use, how we finance and pay for energy projects and the systems that we need to do all the things I just mentioned. 
It will include a shift in what policies and regulatory guidelines and barriers we put in place to protect the public, but that also encourage change that we want to see happen to allow for innovation and advancement. It isn't completely throwing out everything we have and starting from scratch, although some things will disappear, like coal fired electricity generation, for example, but in a lot of areas, it is building on what we've already got at a pace that we haven't seen before, or at least in a very long time. I think that's a key point here. One of the things that makes the energy transition, a change worth noting is the pace of change that we will see. Things have never really been static in the world of energy, from that time when our earliest ancestors first sparked that fire, this is the poetic part that I mentioned earlier, our relationship to energy has never really stood still, but other than a few significant events, the upward trend in sophistication and growth and scope has been fairly linear, gradual, one step after the other, et cetera. It's those exceptions, though, those things that are different from that gradual, linear growth that probably most closely resemble this period of change that has started that we're calling the energy transition. Take the Industrial Revolution, for example. For decades and centuries prior, there had been gradual improvements in how we got around or how we work the fields. Let's say, you know, first by hand, then with tools, maybe a better plow came along. We started using a horse or an oxen to pull that plow, etc. That along comes the steam engine, and all of a sudden, things take off like never before. It wasn't just a matter of swapping out a horse for an engine. It may have started there, but entire economies and aspects of society changed or sprang up where they didn't exist before one change rolls into another and another in quick succession, and before too long, things that couldn't be imagined only decades before are suddenly a reality to a degree, that's what we're looking at today with the energy transition. How far that change goes remains to be seen, but it's pretty clear that we have begun one of those disruptive periods of change that will be looked back on as a major turning point. So yes, the energy transition is about shifting away from greenhouse gas emitting fossil fuels, coal, oil, natural gas, et cetera, to renewable, non-emitting energy sources, solar, wind, hydro, nuclear, etc. But it's also so much more. Even without climate change, our need for energy is growing at an exponential pace. In Canada, we're fortunate in that we have a strong foundation with a relatively decarbonized grid already, so about 80% carbon free nationally, and a diverse mix of hydro, nuclear and renewables like wind and solar. But it's still going to take quite a lot of effort to decarbonize that remaining 20% at a time when, as I keep mentioning, demand is increasing rapidly. In Ontario, our electricity system operator, the ieso, just updated their future demand projections to show that provincial demand will be 75% more or less high by 2050 than it is today. This means we also need to invest in our grid infrastructure to ensure it can handle the increased load, as well as utilizing things like decentralized generation and storage to ensure we don't over build not to mention making sure we can handle more extreme weather.  So, I think that's a good place to set the stage for us. But now let's get into some of the real experts on this. 
And we'll go next to a conversation that I had with Francis Bradley, who's the president and CEO of electricity Canada. Electricity Canada is the sort of national voice of sustainable electricity. Here in Canada, they represent 40 of the largest utility companies. So that's companies that generate, transmit and distribute electricity from coast to coast all across Canada. And Francis and I talked about what level of investment is going to be required in order to accomplish some of those aspects of the energy transition that we talk about. So, here's what Francis had to say about that.   Francis Bradley  09:02 I mean, these are, these are great questions in terms of what the investments are going to look like. And so, you know, we're looking at, as I said earlier, doubling the doubling the grid, we're going to need at least two times more kilowatt hours when we get to the future. So, you know, that's the level of investment that we need to be thinking about. There have been different organizations that have tried to kind of get a scope and scale of what that actually looks like. Again, I mentioned the RBC climate Institute last year. It had a study that came out, and I believe they, they peg this at, I think was $2 trillion was the was the amount that they expected this to cost? Where's the money coming from? Well, you know, that's a really good question, and it's one that we've been engaging in for a number of years now. And I'll try not to be, like, totally pedantic on this, but you know, if you can consider from a public policy standpoint, if we believe that expanding the electricity system is necessary to decarbonize the Canadian economy, then essentially, what you're saying is that expanding the electricity system is a public good from, you know, from an economic theory standpoint, if it's a public good, well, then it is something that should be borne by that taxpayer, not the ratepayer, right? And so, you know, part of this discussion is, who needs to bear the costs for building out a clean, non-emitting electricity system so that the rest of the economy can decarbonize. Should it be the electricity customer, or are there parts of this, this core infrastructure, that that are regarded as a public good, and it's something that this paid for by the taxpayer, you know, and we see this in other sectors, other sectors as well, where, you know, certain things are perceived to be public good and their taxpayers supported. And we saw a bit of a recognition and a realization that this made sense to a degree in the federal government's budget in 2023 where, you know, they essentially pledged one in every $8 in new spending was going to clean electricity projects through a variety of needs. You know, the investment tax credits, the Canada infrastructure bank, a number of funding mechanisms. So, I mean that those kinds of dollars from the federal government was a commitment to building infrastructure that really is unheard of at a national level since the Second World War. So, you know, it really kind of moved clean energy and electrification into the category of, well, I guess it's a public good. Because, you know, there's a recognition that if the federal government wants to achieve these policy objectives, it needs to put some federal dollars in. So, you know that determination is and whether it's a public good or not, has been made in favor of the taxpayer versus the rate there. 
Now, again, you know, you could easily say, well, hang on a second, the rate payer and the taxpayer the same person, except that it doesn't quite work the same way. You know, do we want to attach to the customers' bills, every single customer, the cost of you knows, this, this expansion of our infrastructure or not. And you know, electricity bills are not something that fall, as taxes do disproportionately on those that are wealthier, right? And so, it is a little fairer. Now, you know, in terms of the specific investments, you know, I think exactly how this is going to happen and how it's going to roll out. Those details are still being worked out by some of our members. But I do want to highlight that, you know, the approach here that we're seeing from the government, which we appreciate, is, you know, a one that is so far technology and agnostic, which we think is the right way to go. So, you know, we there isn't, like, a right way or a wrong way to generate electricity. So, you know, the future that we see is going to be an all of the above future that will encompass wind and solar and nuclear and traditional hydro and and hydrogen and carbon capture and storage and more. Not only does that give us, you know, the greatest flexibility, and gives us the ability to balance different types of generation, dispatchable versus non dispatchable. But it also gives us, you know, overall, a far more flexible system. So, you know, that's the what the future is going to look like. So, to, you know, to give you the short answer, it'll be all of the above, and it'll be probably $2 trillion. You know, I kind of touched on this a couple of times, but No, first and foremost, the energy transition, if you will, as I noted earlier, can't be paid exclusively by the ratepayers, right? You know, this is an overall objective that we have. And so, you know, the infrastructure build is so large that that it needs to be, certainly, parts of it need to be paid through the tax system, and that that is progressive in a way that rates are not progressive to begin with. Now, you know, but boy, addressing vulnerable customers absolutely critical. Now there's a variety of things that that could be tried. You know, in the United States that there's a Low-Income Home Energy Assistance Program that it helps keep families safe and healthy through initiatives that assist families with energy costs this, I think they call it the LIHEAP provides federally funded assistance to reduce the costs associated with home energy bills, energy crises, weatherization and minor energy related home repairs. So you know, a similar initiative in Canada could be there to assist the but the most vulnerable, you know, as you're aware, you know, your most vulnerable customers are the ones that have the least capacity to do things like weatherization, and so, you know, there's an example of a national program that we could look at as a model.   Trevor Freeman  15:16 Francis is spot on there when he talks about not only the energy transitions potential impact on our most vulnerable. So those living in energy poverty areas are struggling with energy affordability, but also everybody who's looking to make improvements to their homes where they live in order to reduce costs and participate in the energy transition. That brings us to my next clip that I'd like to share with you, and that's a conversation I had with Sarah Grant. 
Sarah Grant's a good friend of mine who also happens to be an expert in her field, which is helping everyday Canadians on their journey to decarbonizing their homes so that they can contribute to Canada's energy transition. Looking at, what are the things you do in a home to decarbonize. How do you go about that? That process, Sarah and her company, Goldfinch energy are based out of Toronto, and I was really great to hear what she had to say about what it takes to decarbonize a home.   Sarah Grant  16:18 Okay, going from large to small so the largest source of emissions in a home is your space heating. Typically, the emissions are about the same as driving a sort of a mid to large sized car. You know, most people drive, on average, 15,000 kilometers a year. The emissions are going to be about the same so that's going to be the biggest one, if someone is looking and they're a little bit overwhelmed, and the best alternative is a heat pump. So these are they come in many different forms, but the most common, and I think the most common scenario for most homes, is if you have forced air, so ductwork and these kind of heat pumps can extract heat from the air outside. A lot of them can work up to minus 30 degrees. So even up to minus 30, they're able to grab latent heat in the air and pump it inside, and then it gets pumped around your house. The cool thing about them is that they can also work in reverse. So in the summer, they act just like an air conditioner. In fact, the technology is very much the same as an air conditioner, just that they work in reverse in the winter too, so they can also cool. So these are called Air source heat pumps. And yeah, if someone has forced air and they have a gas furnace or an air conditioner or both that need to be replaced, an air source heat pump is, a great option. A lot of the folks that we've worked with that have switched we talked about comfort, sort of, some of the side benefits, I would say, of a heat pump is they're typically quieter, if designed and sized and installed properly, they're quieter both the outside and the inside aspects of a heat pump, and the air from the vents is a lot more comfortable. So, we got a heat pump about three years ago, and the first winter we had it installed, my father-in-law came over for dinner one night and just stood in front of the vent, kind of like a cat basking in that warmth, and said, Oh my gosh, this is way more comfortable. It's not that dry, scorched air that a lot of people associate with, with four stairs. So that's, that's an air source heat pump. You can also, there are also ground source heat pumps, but for a lot of you know urban areas, these ground source heat pumps involve drilling into the ground, either horizontally or vertically, to extract heat from the ground. They, they, I have worked with a few homes in sort of more rural areas where it does make sense, but the costs associated with them are, are really high, and often there's not enough space in urban areas, so they're not quite as common. And I'd say, sort of, just to kind of close the conversation on, we'll conclude it on the on the heating side of things, if you do have another source of like heat, maybe it's maybe it's cast-iron radiators or baseboards, there are also heat pumps that can help you as well. So, with cast iron radiators, they're what's called air to water heat pumps. So, they'll the outdoor unit will look similar to someone who has forced air. 
So, it's going to extract heat from the outside air, and it'll transfer it to water, and that can then go through your cast iron radiators, or maybe in-floor heating or what have you. They're not as common, but the technology has existed for a long time in Europe, and there are more products and contractors that I'm working with that are becoming more comfortable with installing this technology. And last, there are what are called ductless heat pumps. So if you don't have ductwork or cast iron radiators, or maybe you have baseboards, or maybe there's a space where the ductwork just isn't sufficient, these ductless heat pumps can be installed. They can either go on the wall, as sort of these big white boxes. If you've been to Asia, you're probably familiar with them because they exist there, either in the form of heat pumps or air conditioners, or you can have little floor-mounted ones as well, which look a little bit slicker, I suppose, but they do cost a little bit more. So that's heating. For hot water, there are kind of two main options if you want to get off of fossil fuels. Usually, for most of us, that's gas, but there could be propane as well. So, if you want to get off of fossil fuels with your hot water, the heat pump technology exists for hot water as well. Heat pump hot water tanks is what they're called, or actually, confusingly, sometimes hybrid tanks, because they use heat pump technology but then also have an electric coil, so they can operate like a simple electric tank if needed. And they come with a little Wi-Fi app too. So, they are, like, four times more efficient than a gas hot water tank, so you will save a little bit by switching to them. But the way they work is they'll extract heat from your basement, actually, so from your basement air, and transfer that to the water. I would say about half the people I work with end up going with them because they have a space where it makes sense. Maybe their basement is large and they can put it kind of in the corner in a big mechanical room or a workshop where they're not going to go into it. So if that heat pump reduces the temperature by two degrees or so, it's not a big deal. But for me, my home is pretty tiny, and we're using every nook and cranny with five of us in it. So we opted for an electric tank and then paired it with a timer so that it only reheats the water overnight when electricity, if you're on time of use, is cheapest, and that's also when our Ontario grid is using the non-fossil fuel forms of power production, like nuclear and water. So that can work. If you're really lucky, and you have an unfinished basement and a good space, you can also install what's called a drain water heat recovery system. These are super cool, very simple technologies that can transfer the heat from any water that you've already used, like from your shower, to the fresh water before that fresh water then goes into whatever heating mechanism you have, so they can work with anything. Even if you have a gas hot water tank, a drain water heat recovery system is a good way to kind of preheat the water by extracting the heat from the hot water you've already used. A lot of hospitals I know of in Toronto are starting to use these kinds of systems as well. So two main options, an electric tank or a heat pump hot water tank, and then those drain water heat recovery systems as well, and that's hot water. So, you know, going back to what I said about heating your house:
it's usually about 80 or so percent of a home's emissions; hot water is around 15 to 20 percent, just to give an idea of how it fits into the relative picture. But ultimately, I wouldn't say do one over the other. If you have a hot water tank that's broken, replace that with an electric tank or a heat pump hot water tank. Don't just say, oh, it's only 20 percent, I shouldn't do that one. It's still worth it. Every little appliance that you can get off of fossil fuels is one step closer to then being able to disconnect from the gas utility or what have you, and it sets you up for, ultimately, a little bit of savings too, because you're no longer paying that delivery fee to have access to that fossil fuel in your house. So, cooking. Cooking is probably, to be honest, the most fun of all of these, just because it impacts your daily life. Hot water and heating and cooling are the kind of things you don't think about; I don't think about my heat pump unless it's not working properly, which we haven't had an issue with. It just sits there and does its thing, and I'm happy to have it off of fossil fuels. But for cooking, for us, switching was different. We switched from a gas stove to an induction stove about a year ago, and it's amazing. Like, I've got little kids, and I love that I feel comfortable teaching them how to cook on this stove, just because of the way the induction stove works. The whole cooktop doesn't get heated up in the same way, so if you accidentally leave, like, a rag or a paper towel on the stove, it's not going to catch on fire. We did have a few of those incidents with our former gas stove. And it's really quick. I know that there's a lot of stats and data about how quickly you can heat up water, but it's one of those things that you don't believe until you sort of experience it yourself. So yeah, we got a nice slick induction stove, because our gas stove was kind of reaching its end of life, and we were starting to smell some of the gas as well, even when it wasn't on, which I know is an issue, something that's hazardous for our health. And you know, there's a lot of research and evidence out there related to respiratory issues and gas-related cooking. So if you do have a gas stove and aren't able to afford to switch now, make sure you're using your exhaust, like your range hood, properly, not just when you're using the cooktop, but when you're cooking in the oven too. But yeah, if you're able to switch it out, then you can rest easy knowing that you're not using some sort of fossil fuel to cook with. And so your house is cleaner, and you're making the planet a bit cleaner as well.   Trevor Freeman  25:56 So with that clip, we can all kind of plan out our projects for 2025 and beyond, if we haven't already. Great to hear that from Sarah. Again, those are some real, tangible actions that we can take, or can plan to take, in the near future. The conversation that you just heard with Sarah is really focused on homeowners in the context of an urban setting: you know, you've got access to contractors, you've got access to expertise, you've got access to supply chains.
But there are a whole host of people, our neighbors and fellow Canadians living in remote communities, that are just not connected to a national or provincial grid, or, in some cases, even to a natural gas grid. I had a really great chat with Gemma Pinchin from Quest Canada, who is leading some research on how rural and remote communities, including many Indigenous communities, can engage in the energy transition equitably and sustainably. And we talked about some of the challenges that those areas and communities face. So have a listen to this chat with Gemma Pinchin.

Gemma Pinchin  27:09
Through Quest projects, particularly the Net Zero Community Accelerator, which works with communities where the end goal is to create a community energy and emissions plan, and also through policy work and those kinds of pieces, we saw that the net zero transition is sort of chugging along, but there's kind of been a gap. The transition tends to focus more on the urban context: urban population centers, the big cities, Toronto, Montreal, Vancouver, those kinds of places. We saw that as leaving out a really big chunk of Canadians. I think the statistic, off the top of my head, is that one fifth of Canadians live in rural and remote places, so it's not a small number. We wanted to make sure that, as the net zero transition was moving along and progressing, this large group of Canadians wasn't forgotten about. And the net zero transition is going to rely, and has been relying, on rural land and rural populations to house renewable energy, for food production, as well as for carbon sequestration. So leaving this big group of people out is just kind of inconceivable, I guess. And what Quest saw was that this was happening. So we started this research project to make sure that those voices were being heard and considered as Canada moves through the net zero transition.

I think there's this idea that there's a one-size-fits-all solution for every community, and that solutions that work in urban centers will work in rural centers, and that's just not the case. Take something obvious like transportation. My literature review highlighted that within urban centers, the most sustainable option would obviously be public transport. But if you apply that same lens (cars are bad, and we shouldn't be using them) to a rural community, it's almost impossible to be sustainable and net zero, because they don't have the public transport option. So in that context, looking at it with a different lens, a rural lens, you would look at things like consolidating car trips, or making sure that services like health care and groceries, the things that we take for granted in urban centers, are kept close, kept in communities, because a lot of services are moving out of rural communities. And that doesn't necessarily seem like a net zero issue, but when people in those communities have to drive three times as long to get to their doctor, that's a huge emissions issue.
It was an interesting look at the way we think. Even myself, before I was doing this, I was like, well, cars are bad, gas cars aren't great for emissions. But the reality is that rural communities need this transportation; there's no other way for them to get around, and it would be incredibly isolating, because you can't function as a society if you're just stuck in your house. So having that different lens, and looking at it in a different context, I think that's really, really important as we move rural communities through this net zero transition.

Well, in those communities that aren't connected to natural gas or the electricity grid, diesel used to be their only option. We need electricity to power modern life; you can't have a modern existence without some form of power. So the communities that aren't connected are completely reliant on diesel. It's frustrating, because these communities do tend to be quite far away from power grid infrastructure, so it's usually considered economically non-viable to connect those remote communities to the provincial power grids. These communities are also very small, so it's a small number of people for whom you would have to spend all this money on infrastructure to get the power lines to them. And Ontario and Canada are both very big, so there are many communities that exist quite far away from power lines or existing grid infrastructure. So yeah, diesel has kind of been their only option for power, for a modern existence, up until, I would say, quite recently.

In terms of what's needed for rural and Indigenous communities, I definitely think we need more research like what I'm doing. These are voices that haven't necessarily been heard, and if we're going to have an energy transition, we need to include these voices. I think the best way to do that is to do research like mine and figure out what their needs are and how we can progress to that next step. There are some amazing Indigenous organizations that are already doing great work in this space, like Indigenous Clean Energy and the Center for Indigenous Environmental Research. So they're already doing this, but consolidating all of that, and having people, and governments, actually listen, I think is really, really important. Those voices just need to be heard and listened to. Otherwise, we're not going to get anywhere. It'll be like you said, just putting in technology and then kind of leaving it there, and that's not going to work; we're not going to get anywhere with that sort of approach. So: making sure local context is understood and local voices are heard.

Trevor Freeman  33:33
And finally, to wrap up this episode, I wanted to share a clip of a conversation that I had with two really brilliant folks from EY Global, Greg Guthridge and Nicholas Hancock. I talked to Greg and Nicholas about the fact that, in the end, we are all end users of energy. It doesn't matter what your role is in the energy transition, where you live, or where you work: we're all consumers of energy. We all need to live in homes that have heat, and cooling in some parts of the world, and we work in buildings that are like that.
We need to get around, charge our devices, and cook, so we all have a stake in this; we all have a role in this. My conversation with Greg and Nicholas really focused on their work helping industry and businesses navigate this energy transition and inspire and influence action amongst all kinds of consumers, because not everybody approaches the energy transition in the same way. It was really great to chat with Nicholas and Greg about how they see the approach to the energy transition with consumers.

Greg Guthridge  34:41
Yeah, Trevor, I'm glad you brought up the word customer, because we use that word as kind of an overarching term. And let me, if you don't mind, dive in a little bit more on that, because customer, while I'll use it on occasion, is actually a bit of an old-fashioned term, believe it or not. We try to use the term consumer, or, even better, omnisumer, when we talk about the participants in the energy experience moving forward. We're picking these words carefully, because customer kind of implies a one-way interaction. Consumer implies that you're dealing with a participant that's two-way, that's engaging in a much more active capacity. And then you get into omnisumer, which is what we believe is really the consumer of the future: participants that are multi-channel, multi-product, multi-provider, a many-to-many kind of experience. So you'll hear me use them all interchangeably, but really what we're trying to convey is that the good old days of somebody at the end of the value chain just receiving a bill for energy that they take for granted are disappearing.

Now, to your actual question around the different strata of consumers: we do think of it in terms of residential customers (the mass market, the people at home), and then we have a number of other major categories that we think about. There are small and medium businesses, and large commercial and industrial. There's a category which we call MUSH: municipalities, universities, schools and hospitals. And then there are new categories of consumers that are forming, peer-to-peer and prosumer types of consumers that are trading energy. They might have electric vehicles or solar or storage, and they're not just consuming electricity for their own benefit; they're actually selling it back into the grid or to others, and becoming more of a business partner along the way. So the takeaway here is that what used to be a passive, one-way customer experience is now leaning into a much more two-way, engaged, and much more complex consumer experience between the energy provider and their participants.
Trevor, I'm going to start the response to this, and then I'm going to hand it over to Nicholas Hancock, who leads our research, to give a bit more color commentary on how we structured our research. But to start with: about four or five years ago, we started to really think about the supply and demand sides of the energy transition. A lot of focus around the world is on the supply side, building the infrastructure, building in new renewable, green and sustainable sources, getting all of the technology to move cleaner power from one place to another, from an engineering perspective. And what we really started to realize is that, as part of the energy transition, we're trying to do a generation of change in just a couple of decades. On the demand side of this equation, we've got a bunch of very complex consumers, consumers that interact and behave irrationally, with different behaviors. Some will be very excited about the energy transition, others will be very reticent, and everything in between. So in order for the energy transition to accelerate and to achieve the benefits that we're all looking for, we need to find a way to engage the consumer in ways which, frankly, are going to really push the envelope. So we started our research program, and Nicholas Hancock has been leading the charge. Nick, if you don't mind, can you give us a quick overview of the global nature of the research and how we've approached it?

Nicholas Hancock  39:00
Yeah, absolutely. So we started our research program about three years ago, really trying to take a global view, mixing regions. Some of them are really out on the leading edge of the energy transition; we've got countries like Sweden, for example, that are kind of further down the path, as well as North America, which is, I would say, a little bit more in the middle. And then we've got some countries that are maybe lagging or taking their own paths in the energy transition; we've included countries like China and Singapore, and we included Indonesia last year. So it's really a global view of what consumers are thinking in terms of how they approach the energy transition: what sort of products and services they're interested in, and what values and preferences they bring to it when it comes to their energy providers, but also the broader ecosystem of providers that we see emerging out there. Who are they really interested in turning to when it comes to advice, learning about solutions, and purchasing them, and even things like control over solutions in the home, which is really important when it comes to energy flexibility in the future? We've been exploring how different consumers approach and feel about this. So what we did is we developed a survey, and we're entering our fourth year of doing that now. We work with an independent third party that helps us perform those surveys online across the globe, and then we take those results back and look at what we see. And to your point, Trevor, around the voices of the transition, we've been looking at how some of those different groups break out, and what the different values of different segments of those consumers are.
Because even sitting around the dinner table, I'm sure everybody can feel that we don't all have the same opinions when it comes to energy, and even more so when it starts to come to things like changes to your home or changes to your vehicles. So that's really what we've been exploring for the last number of years.

You know, what we did is, having looked at all these different markets, we found some pretty interesting similarities in the percentages of the population that fit into these five categories. It varies quite significantly, market by market, country by country, geography by geography, but there's a way for us to think more simply about an incredibly complex, fragmented, distributed residential, mass-market customer base in terms of what we think are really just five different categories. We've thought about the organization of these five categories from a behavioral perspective and from a values perspective: what are their interests, and how do they plan to engage? So in sequence here, I'll talk about the five, and I'll put them in order from most active to least active. The key thing to keep in mind is that there's no wrong place to be as a residential customer, and you can actually flip around; you can move from one place to another almost overnight, so it's quite fluid.

The first category is what we call the energy champions. They're the savvy customers. They're actually the customers that have been the first to move, the ones that we see in the news already. They're probably already using new energy products and services in their home: they might have solar on the roof, they could potentially have storage, they might already be using an electric vehicle. We make fun of this category a little bit; they're usually the ones that pre-order their iPhone, and they might already have an interest in the new Tesla truck or some other device. They're absolutely the innovators. They're the early movers, and they're interested in spending time researching. They're going to pay attention to where their energy comes from, and they're going to be quite active. So those are the energy champions.

The next category is what we call the energy enthusiasts, and this is actually the one that we have to pay the most attention to. They're the fast followers; they're the energy-conscious category. When they observe what the champions are doing, and they get a bit more comfortable and start to move, they actually will influence the whole market. The enthusiasts are maybe slightly more cautious, but they're also the fast followers: once they can see the value proposition, once they're convinced that the technologies and capabilities are for real, then they're going to move. They may not pre-order their iPhone, but they're probably pretty close in terms of thinking about how they're going to advance into the energy market.

The next category is the novice category, or the agnostics. What's interesting about this segment of consumers is that they're actually pretty passive. They can see the value proposition. They can see that there are a lot of people taking an interest in it. But for a number of different reasons, they're not moving.
They're very novice, very agnostic, and it's because they're weighing other things: all right, I can see that I can save money, or I can do something that will improve the environment, but it's just going to take too much time, or I have other priorities, or whatever. So as an industry, we need to find a way to activate and excite them. We need to make it as effortless and frictionless as possible for this category of consumers to move. They will move, and they will do things, but they're influenced by a whole lot of other variables that they believe are a higher priority.

The fourth category is what we call the bystanders, or the skeptics. They're a little bit mistrusting, frankly, of the messaging around the energy transition, around sustainability or the environment, and they're probably going to take a fairly skeptical approach: is this for real? Is it really going to provide me benefit? Is it really going to advance my personal capabilities? What's interesting about this group is that they're actually very interested in new energy products and services, but for different reasons. They're going to want more control; they're going to want, maybe, off-grid capabilities. So they're actually as interested as the others, but the way you approach them is going to be very, very different.

And the final category are the allies. Energy is a household necessity, and this category is very dependent. They might have income challenges, or other challenges that we have to look after. It is a critical household service that we provide, and we need to make sure that we look after the low-income, the vulnerable, and the people with medical dependencies that you find in the allies, or dependent, category.

So the range of consumers across these five will vary. We've got a great little quiz out there on ey.com where you can answer some questions, and it'll tell you which kind of consumer you are today. But yeah, we see that most consumers will fit into one of these five categories and then move from there, depending on what's happening in their life experiences.

Trevor Freeman  46:49
Okay, well, there you have it. I hope those clips give you a sense of some of the different aspects of the energy transition, what it is and how it impacts all of us. I really encourage you, if you haven't already, to go back and have a listen to those and other episodes from this year. I think it's been a great year of great conversations, and what I hope comes through, not just in the conversations you've heard today but in all of our episodes, is this idea that there is hope. That may be kind of a funny thing to hear; oftentimes, when we're hearing about climate change and the energy transition and the challenges that we face, it can be discouraging. But there are some really great and interesting things happening, and some real innovation. And as someone who works in this space, I think it's really important to be aware of the context that we work in, but also to be optimistic and to focus on the really cool and great things that we're doing. I think that goes for most, if not all, of the guests that I've had on the show this year; you can really hear their passion and their hope for what is to come.
So have a listen, and take some hope from that as you relax over the holidays and we round out this year.

As we round out this episode, I do want to give another thanks to all the guests that we've had on the show this year. We certainly couldn't do this without the fantastic and amazing people that we bring on to chat with; goodness knows, you don't want to just hear me ramble on episode after episode. So I really appreciate people taking the time to come and share their thoughts and insights with us. I also want to say a huge thank you to the team behind these episodes. This is a multi-person effort, with folks across Hydro Ottawa and our partners helping us pull it together. I want to especially call out Morgan Barnes for his help in pulling the content, the feel, and the text together behind these episodes. It's me rambling here behind the microphone, but really Morgan and I work together to shape the theme and the thread of each episode. So big thanks to Morgan for his thought leadership, his dedication, and his hard work in helping pull these things together. Morgan, you're the best.

Okay, so with that: my team is always after me to answer those rapid-fire questions that you often hear at the end of episodes, but they also gave me an out, because I don't intend to do that. The out is, what is one of my favorite holiday traditions? So I'm going to pivot and take that one as we go into the holiday season here. Reflecting on this, I think one of my favorite holiday traditions, at least in the last little while as I've built a growing family, is going and getting the Christmas tree. A number of years ago now, we moved houses. It's not a big house, it's a house here in Ottawa, but we have this small part at the back of the house with a really high ceiling. We always go to one of those cut-your-own tree farms and cut our tree down, and that first year I had this idea that, oh, we've got a really high ceiling, so we've got to get a really tall tree. That kind of set a precedent: I can't go out and get just a little tree anymore. The kids want, well, I say the kids, it's probably more me, but the kids and I both want that tall tree that scrapes the ceiling as we put it up. So it's always fun trying to find the right tree with the perfect shape, cutting it down, hauling it back to the car, and trying not to pull too many muscles doing it. That's one of my favorite holiday traditions, and then sitting in the house with that nice, fresh-smelling Christmas tree for at least a few weeks.

Thanks for joining us in 2024. We really appreciate you listening, and we appreciate the conversations. As always, don't hesitate to reach out to us; thinkenergy@hydroottawa.com is our email address. We would love to hear from you, and to hear your ideas and thoughts on topics and guests. So there we are at the end of the year, and we look forward to connecting with you again in 2025, when we will be back with more episodes, more guests, and more conversations about energy and the energy transition. Thanks so much for listening.

Thanks for tuning in to another episode of the Thinkenergy podcast. Don't forget to subscribe wherever you listen to podcasts, and it would be great if you could leave us a review; it really helps to spread the word.
As always, we would love to hear from you, whether it's feedback, comments, or an idea for a show or a guest. You can always reach us at thinkenergy@hydroottawa.com

OnWriting: A Podcast of the WGA East
Episode 121: Peter Straughan & Zach Baylin

OnWriting: A Podcast of the WGA East

Play Episode Listen Later Dec 18, 2024 49:29


Screenwriters Peter Straughan (Conclave) and Zach Baylin (The Order) discuss their latest projects and previous work, their process, and much more. Peter Straughan is a writer and playwright. His most recent screenplay is the 2024 film Conclave. Before Conclave, Peter's screenwriting credits include The Goldfinch, Our Brand is Crisis, Frank, and Tinker Tailor Soldier Spy, the last of which received several accolades including a 2011 Academy Award nomination for Best Adapted Screenplay. In addition, he wrote the 2015 television adaptation of Wolf Hall, which earned him a nomination for the Primetime Emmy Award for Outstanding Writing for a Miniseries, Movie or a Dramatic Special. Zach Baylin is a writer whose 2024 credits include The Order and Bob Marley: One Love. His other credits include Gran Turismo, Creed III and King Richard, the last of which earned him an Academy Award nomination for Best Original Screenplay. --- Read shownotes, transcripts, and other member interviews: www.onwriting.org/ Follow the Guild on social media: Twitter: @OnWritingWGAE | @WGAEast Facebook: /WGAEast Instagram: @WGAEast

Lead to Soar
Money, Leadership, and Potential: Why Women Must Close Their Gender Gap with Lizzy Goldfinch

Lead to Soar

Play Episode Listen Later Dec 16, 2024 43:27


Michelle Redfern sits down with Lizzy Goldfinch to explore the crucial intersection of money, leadership, and women's potential. Lizzy shares her professional expertise and personal experiences to illuminate how financial literacy and independence are foundational for closing the leadership gender gap. The conversation focuses on breaking down harmful money myths, embracing financial empowerment, and taking actionable steps to build wealth and create choices. Whether you're navigating your career, seeking financial freedom, or dreaming of a larger impact in your community, this episode will inspire you to take control of your financial future.

Episode Highlights

The "3 Ds" of Financial Vulnerability:
• Death, Divorce, and Domestic Violence often leave women in precarious financial positions.
• Women aged 55+ are the fastest-growing demographic of homeless people in Australia, emphasizing the need for financial independence.

Why Money Matters:
• Financial resources give women the power of choice and freedom, enabling them to escape abusive situations or transition careers without fear.
• By 2034, women are projected to control 65% of Australia's wealth, making financial literacy more crucial than ever.

Reframing Harmful Money Mindsets:
• Common limiting beliefs: "I'm not good with money." "Money is vulgar or shameful to talk about."
• Empowering reframes: "I'm learning to get better with money." "I love money and the choices it brings."

Investing in Yourself:
• Warren Buffett's advice: the best investment you can make is in yourself.
• Tips for self-investment: podcasts, networking, online courses, and continuous learning.

The Power of Choice Through Money:
• Financial independence allows women to volunteer and engage in meaningful causes, and to leave unsupportive workplaces or unhealthy relationships.

Practical Tips to Build Wealth:
• Create a "Runaway Fund" for emergencies or unexpected life transitions.
• Start small with consistent saving and investing—compounding interest is a game changer (see the short sketch after this listing).
• Keep a record of your accomplishments to ensure fair remuneration at work.

Leadership Call to Action
1. Examine Your Money Mindset: Identify any limiting beliefs you have about money. Write them down and consciously work to reframe them into empowering affirmations.
2. Start Building Wealth Today: Open a savings account and set up an automated transfer; even $5 per paycheck can snowball over time.
3. Prepare for Your Next Salary Negotiation: Document your accomplishments and contributions to the business. Set a target remuneration goal and plan your negotiation strategy now.
4. Get Educated: Listen to podcasts like She's on the Money, read books like The Barefoot Investor by Scott Pape, or follow Lizzy Goldfinch for actionable financial insights.
5. Support Your Community: Use your financial resources or time to give back in meaningful ways, whether by mentoring, volunteering, or donating to causes you care about.

Connect with Lizzy Goldfinch:

Resources Mentioned in the Episode:
• The Barefoot Investor by Scott Pape
• The Millionaire Next Door by Thomas J. Stanley and William D. Danko
• PepTalkHer App by Meggie Palmer
• The Snowball - Warren Buffett

Make sure to subscribe to Lead to Soar on your favourite podcast platform and share this episode with women in your network who are ready to take control of their financial futures. Hosted on Acast. See acast.com/privacy for more information.
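Editor's note: the "snowball over time" point above is easy to see with a little arithmetic. Below is a minimal, purely illustrative sketch in Python; the $5-per-pay figure echoes the episode notes, while the pay frequency, rate of return, and time horizon are assumptions invented for the example, not advice from the episode.

# Illustrative only: how a small automated transfer can compound over time.
# The $5-per-pay contribution echoes the episode notes; the return rate,
# pay frequency, and horizon are assumptions for this sketch.

per_pay = 5.00          # dollars saved each pay
pays_per_year = 26      # assumed fortnightly pay
annual_return = 0.07    # assumed average yearly return
years = 30

balance = 0.0
for _ in range(years):
    balance += per_pay * pays_per_year  # add this year's contributions
    balance *= 1 + annual_return        # grow the running balance for the year

contributed = per_pay * pays_per_year * years
print(f"Total contributed over {years} years: ${contributed:,.0f}")
print(f"Balance with compounding:             ${balance:,.0f}")

Under these assumed inputs, roughly $3,900 of contributions grows to something in the neighbourhood of $13,000 over 30 years, which is the compounding effect the episode is pointing at; real returns will of course vary.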

Talkin' Birds
#1,012 Nov. 17, 2024

Talkin' Birds

Play Episode Listen Later Nov 17, 2024 30:00


On our latest show: Birding by Ear with Donna Posont; listening to the beautiful Lawrence's Goldfinch; and talkin' turkey with Mike O'Connor.

Girls On Film
Ep 185: The Importance of Women in Leadership at Evolution Mallorca International Film Festival

Girls On Film

Play Episode Listen Later Nov 7, 2024 45:06


In this special episode from the Evolution Mallorca International Film Festival, Anna Smith hosts a panel discussion on women in leadership within the film industry. In the third iteration of this panel, Anna is joined by three international industry leaders: Kirsty Bell, CEO of the Oscar-winning production company Goldfinch, international film publicity strategist Mia Farrell and Oscar-nominated producer Rebecca Pruzan. Together, they explore the pathways to landing leadership roles, the skills needed to thrive once there, and the shifting dynamics of gender and race in the film industry. Kirsty Bell shares her experiences starting Goldfinch as a family business and reflects on directing A Bird Flew In—the first film shot during the pandemic, which pioneered COVID protocols on set. She also highlights the importance of supportive men in her life, and how they've played a role in her career. Mia Farrell offers insights from her career as a publicist, discussing her passion for elevating films through strategic PR, as well as her advocacy for greater diversity in the industry. She tells the audience about the genesis of her impactful Screen Daily article, "Why are so few Black people in positions of power in the arthouse film PR sector?". Mia also shares her experiences working on The Dads, a heartfelt documentary on fatherhood and trans youth, directed by Luchina Fisher, and available on Netflix. Rebecca Pruzan discusses her transition from a career in IT and consultancy to becoming an Oscar-nominated producer. She reflects on the identity politics at play in producing her short film IVALU, highlighting the challenges of navigating cultural sensitivities in storytelling and discussing the reaction to the film on its festival journey. This episode is in partnership with Evolution Mallorca International Film Festival. You can find out more about the Evolution Mallorca International Film Festival here: https://www.evolutionfilmfestival.com/tickets A reminder that you can read a transcript of our episodes on Apple Podcasts by clicking the 'transcript' option in settings in the episode description. Sign up to the Girls On Film newsletter below: http://eepurl.com/iEKaM-/ or email girlsonfilmsocial@gmail.com to be signed up. Become a patron of Girls On Film on Patreon here: www.patreon.com/girlsonfilmpodcast Follow us on socials: www.instagram.com/girlsonfilm_podcast/ www.facebook.com/girlsonfilmpodcast www.x.com/GirlsOnFilm_Pod www.x.com/annasmithjourno Watch Girls On Film on the BFI's YouTube channel: www.youtube.com/playlist?list=PLX…L89QKZsN5Tgr3vn7z Girls On Film is an HLA production. Host: Anna Smith Executive Producer: Hedda Lornie Archbold Producer: Charlotte Matheson Intern: Anna Swartz Audio editor: Benjamin Cook House band: MX Tyrants © HLA Agency

The Filmmakers Podcast
Conclave: A Screenwriting Superclass with Oscar Nominated and Bafta winning writer Peter Straughan.

The Filmmakers Podcast

Play Episode Listen Later Oct 29, 2024 50:06


We are honored this week as we are joined by BAFTA-winning screenwriter Peter Straughan to talk about his latest film Conclave, which stars Ralph Fiennes and is out NOW! Peter, who is known for writing the feature films Tinker Tailor Soldier Spy, Mrs Ratcliffe's Revolution, Sixty Six, How to Lose Friends and Alienate People, The Men Who Stare at Goats, Frank, The Snowman and The Goldfinch, chats with Pope 'Dom' Francis III and Cardinal 'Giles' Archibald II about his rather excellent feature Conclave, based on the book of the same name. They talk themes and plot, story structure and tone, his path from humble beginnings to indie film darling to Academy Award nominee, and what it takes to write an Oscar-level drama contender with comedy and drama thrown into the heady mix. Conclave is in cinemas now. SHORT FILM SHOWCASE Center Frame https://www.centerframe.com/industry-showcase. WATCH our interview with Elizabeth Olsen and Carrie Coon on YouTube here https://www.youtube.com/watch?v=V-TU39BmwLI&t=167s. PODCAST MERCH Get your very own Tees, Hoodies, onset water bottles, mugs and more MERCH. https://my-store-11604768.creator-spring.com/ COURSES Want to learn how to finish your film? Take our POST PRODUCTION COURSE https://cuttingroom.info/post-production-demystified/ PATREON Big thank you to: Serena Gardner Mark Hammett Lee Hutchings Marli J Monroe Karen Newman Want your name in the show notes or some great bonus material on film-making? Join our Patreon for bonus episodes, industry survival guides, and feedback on your film projects! SUPPORT THE PODCAST Check out our full episode archive on how to make films at TheFilmmakersPodcast.com CREDITS The Filmmakers Podcast is written, produced and edited by Giles Alderson @gilesalderson Logo and Banner Art by Lois Creative Theme Music by John J. Harvey Learn more about your ad choices. Visit podcastchoices.com/adchoices

Romantasy Readers
Goldfinch

Romantasy Readers

Play Episode Listen Later Oct 27, 2024 83:04


Join Alley and Nicole as they break down the final book in the Plated Prisoner series, Goldfinch by Raven Kennedy. They talk all about their favorite moments and some of the theories they had that came true. As always, the first 20 minutes are spoiler free.

Trilogy Outdoors
Season 3 Episode 98 The Senator Capt E Rick Austin and Capt Jason Burton Talking Storm Relief and Colorado Elk

Trilogy Outdoors

Play Episode Listen Later Oct 5, 2024 118:34


It is always great to get the Senator in the studio and talk Colorado Elk hunting. We are joined via phone by Team Trilogy Outdoors pro staffer Rick Austin. Rick and Sen. Goldfinch just got back from another trip out west to chase Elk. Rick was fortunate enough to draw a black powder tag for Elk and was excited to put the stalk on what would hopefully be his first bull. The hunt was exciting, and it's always great when the Senator gets to describe his hunts with his hunting partner. We wish Rick's wife, Heidi, best of luck in just a few weeks at her first attempt at a bull Elk with a rifle. We are also joined this week by Capt. Jason Burton of MIFC. Known as the Flounder Pounder, Jason was busy this week helping coordinate assistance and relief efforts for our friends in the WNC area ravaged by Hurricane Helene. Jason made the trip up this week with one of the trucks that was loaded down with essentials. He shares a lot of things that the media is not gonna share, along with the reactions and opinions of the many locals he was fortunate enough to spend time with on his trip. Please understand that this is not going away anytime soon. These people are going to need help over the coming weeks and months. We are trying to use our platform to get the information and awareness out in every way possible. The incredible outpouring that you have shown so far is mind blowing, and we hope to keep up the pace and to continue doing for others. No donation is too small, and any time you can volunteer to help organize or sort through the donated items is greatly appreciated. Enjoy! Trilogy Outdoors Media Become a supporter of this podcast: https://www.spreaker.com/podcast/trilogy-outdoors--5441492/support.

Bird Notes
The Goldfinch

Bird Notes

Play Episode Listen Later Aug 19, 2024


Sunlight on the wing

Present and Sober
Your Sober Rebellion with Sam Goldfinch

Present and Sober

Play Episode Listen Later Jul 30, 2024 45:09


Join Sam and Terri as they dive into the essence of innate health, the power of processing emotions, and the difference between seeking external pleasure and finding internal peace.

Mike, Mike, and Oscar
TIFF & Venice Lineups Expand The Oscars Landscape - ORC 7/23/24

Mike, Mike, and Oscar

Play Episode Listen Later Jul 24, 2024 59:59


A TIFF 2024 RUNDOWN: 1:47 - Previously Announced TIFF Premieres: We get the obligatory Family Guy reference out of the way. Then we discuss how Eden is about hot people on an island, why we need to be right about Nightbitch, and whether We Live In Time is more Brooklyn or Goldfinch. TIFF GALAS & SPECIAL PRESENTATIONS: 4:01 - Opening Night with Nutcrackers from David Gordon Green and that stupid Adam Sandler movie that one of us thinks is great. 6:25 - Closing Night with Rebel Wilson's The Deb 7:48 - From Cannes to TIFF: Anora, Bird, Emilia Perez, Oh Canada, Rumours, & The Shrouds. 9:40 - Better Man via Robbie Williams starts a whole Pearl Jam riff 11:20 - A cryptic Conclave book report. 12:07 - Other big name films we've been following forever like Hard Truths from Mike Leigh, Heretic from A24, Piece by Piece from Lego, The End from the artist formerly known as Michael Shannon, The Fire Inside from MGM, The Return from a ripped Ralph + Will & Harper from Sundance. 15:25 - The Piano Lesson announces the Washington family takeover of Hollywood. NEW TIFF ARRIVALS INTO THE OSCARS LANDSCAPE: 18:07 - 40 Acres has Danielle Deadwyler vs Canadian Cannibals! 19:04 - Familiar stories modernized with Brett Goldstein in All Of You, banshee Barry vs Christopher in Bring Them Down, Sandra Oh in Can I Get a Witness?, a jacked Orlando Bloom in The Cut, plus The Motorcycle Diaries director is ironically back in I'm Still Here. 21:48 - The Last Showgirl has us hoping for a Pam Anderson Oscars campaign 22:58 - Millers in Marriage is the next from Edward Burns that's better than The Family Stone. 24:32 - Relay seems like Hell or High Water or The Accountant. 25:09 - Riff Raff seems funny with Jennifer Coolidge, Bill Murray and Pete Davidson 26:10 - Sharp Corner stars Ben Foster who may or may not walk off into the woods. 27:22 - The Order intrigues us with Hoult, Law, Smollett, Maron, Baylin, etc. 29:10 - The Penguin Lessons is probably not Mr. Popper's Penguins. 29:44 - Pedro Paramo and why his bonafides make it a potential contender 31:13 - Sketch seems like a funny Harold and the Purple Crayon w/ D'Arcy Carden & Tony Hale. 32:27 - Unstoppable and how we hope Jlo gets some good press with this one. 34:14 - Without Blood is the next from Angelina Jolie and Cinecitta Studios. THE 81ST VENICE FILM FESTIVAL: 35:18 - A Recap of the News on the Jury, Tributes, and Opening Night Film 36:19 - Maria and why Angelina Jolie could be a favorite for the Volpi Cup. 37:01 - Queer and why Luca Guadagnino's work promises a high floor 38:15 - Pedro's Room Next Door and how we get sidetracked re: premise writing. 39:43 - Joker Folie à Deux and our mixed review of the second trailer. 43:44 - April and how M2 has faith in this director. 44:38 - The Brutalist and the allure of the mysterious and wealthy client 46:34 - Babygirl and when we momentarily lose our morality. 47:58 - Harvest and the intrigue of good young actors. 48:45 - Wolfs and the callback that also includes your obligatory Tarantino reference. 49:28 - Baby Invasion and why it has to be Harmony Korine. 50:12 - Intriguing Venice Docs and why M2 owes you some doc reviews. THE NYFF OPENING NIGHT FILM ANNOUNCEMENT - NICKEL BOYS 51:00 - Aunjanue Ellis as a Supporting Actress play, the history of previous Opening Nighters from NYFF at the Oscars, and another major contender centered on a child protagonist. (Plus, we get a quick happy story from Uncle Mike). 55:25 - OUR OUTRO and how to contact us. 
Plus, you hear about a handful of upcoming episodes, get some Austin Powers quotes, and a discussion on how we would do Telluride if we ever go.

Bad at Sports
Bad at Sports Episode 873: Leslie Baum, Andreas Fischer, Justin Witte - Panel Discussion at Goldfinch

Bad at Sports

Play Episode Listen Later Jun 17, 2024 64:29


In this captivating episode of the Bad at Sports Podcast, we bring you a special recording of a panel discussion held at the Goldfinch Gallery. The event, which saw an enthusiastic, standing-room-only turnout, with seemingly every painter in Chicago in attendance, delves into the intricate world of painting. "Team Contemporary Painting" panelists Leslie Baum and Andreas Fischer share their insights on a range of topics, including painting techniques, materials, subjects, and the generation of content. They also explore the dynamics of collaboration among painters, shedding light on how artists work together and maintain their individuality in the art world. Leslie Baum, a distinguished painter known for her vibrant and dynamic works, joins Andreas Fischer, whose thought-provoking pieces have garnered critical acclaim, in a conversation moderated by Justin Witte. Justin, the Curator and Director at the Cleve Carney Museum of Art, guides the discussion with his profound understanding of contemporary art and its evolving landscape. https://www.atthemac.org/cleve-carney-museum-of-art/ https://goldfinch-gallery.com/ https://lesliebaum.net/ https://goldfinch-gallery.com/artists/66-andreas-fischer/overview/ Image: Andreas Fischer, "Grandma is Mountains", 2024

Toxic Workplace
Antidote Episode: Free Yourself From Your Toxic Workplace Story - Interview with Alisha Wolf of Goldfinch Wellness

Toxic Workplace

Play Episode Listen Later Jun 13, 2024 52:53


Alisha Wolf is a psychotherapist and coach who has developed a niche expertise in understanding and healing from toxic workplaces using self-regulation, self-compassion, and the creation of a coherent narrative. Alisha has been helping her clients navigate out of toxic workplaces and over the years she's gained incredible insight on how we build our stories around toxic workplaces, how we can get stuck in that narrative and she's here to help us get unstuck. In this interview, Alisha talks about the role of righteous anger, how we can get stuck in "thought court" against our toxic workplaces, and the role of radical self-acceptance. To learn more about Alisha and the work she offers, visit GoldfinchWellness.com

0xResearch
The MakerDAO-Aave Debacle: Risks and Rewards of Collateralizing DAI | Analyst Round Table

0xResearch

Play Episode Listen Later Apr 10, 2024 58:41


In this episode, the Blockworks research team examines Solana's transaction failures and the network's ongoing struggles to handle high transaction volumes. The conversation then turns to the MakerDAO and Aave debacle, debating the risks and merits of MakerDAO's decision to collateralize DAI with Ethena's stablecoin. They also touch on the recent default on Goldfinch and the broader challenges facing on-chain private credit markets. Finally, they explore the rise of Pump.Fun and the resurgence of meme coins. As always remember this podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. - - Follow Brick: https://twitter.com/0x___Brick Follow Matt: https://twitter.com/MattFiebach Follow Sam: https://twitter.com/swmartin19 Follow Ren: https://twitter.com/purplepill3m Follow Blockworks Research: https://twitter.com/blockworksres Subscribe on YouTube: https://bit.ly/3foDS38 Subscribe on Apple: https://apple.co/3SNhUEt Subscribe on Spotify: https://spoti.fi/3NlP1hA Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ - - Join us at DAS (Digital Asset Summit) in London this March! DAS is the #1 institutional conference in crypto, hosted by Blockworks. Use the link below to learn more, and use 0X10 to get 10% off your ticket! Sign up now because the price goes up every month. See you there! Learn more + get your ticket here: https://blockworks.co/event/digital-asset-summit-2024-london/home - - https://x.com/_nishil_/status/1777048334664081548 - - Timestamps: (0:00) Introduction (1:08) Enabling Arbitrum Stylus (5:08) Velodrome DEX on Bitcoin (7:40) Solana & SUI Airdrops (13:39) dYdX Permissionless Smart Contracts (18:15) dYdX Segment: Axelar (22:53) Solana Transaction Failures (31:57) Aave & MakerDAO Debacle (43:37) Goldfinch Default (51:43) Pump.Fun Meme Infrastructure - - Check out Blockworks Research today! Research, data, governance, tokenomics, and models – now, all in one place Blockworks Research: https://www.blockworksresearch.com/ Free Daily Newsletter: https://blockworks.co/newsletter - - Disclaimer: Nothing said on 0xResearch is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Dan, Sam, and our guests may hold positions in the companies, funds, or projects discussed.

Bourbon in The Back Room
Conservation, Finding Billions of Dollars, and the Future of Infrastructure in SC - With Republican Senator Stephen Goldfinch

Bourbon in The Back Room

Play Episode Listen Later Mar 22, 2024 60:56


There is A LOT happening in South Carolina government and politics! Where did the mystery 1.8 billion dollars come from? What's Constitutional Carry? How is S.C. faring in national politics? What is the Yankee Tax? Vincent and Joel sit down with guest Senator Stephen Goldfinch to discuss the outdoors, South Carolina wildlife, preservation, Senator Goldfinch's family and background, the NEW judicial screening process, news about Stephen Goldfinch's statewide office future, how he's paving a path to lead S.C. into an environmentally friendly future, and so much more! Get your latest Statehouse update and hear firsthand the rationale behind some of the legislature's most controversial bills. Join Senators Sheheen and Lourie in this week's episode, where they take a deeper look at upcoming legislation and lawmakers' actions in S.C. Support the show. Keep up to date with BITBR: Twitter.com/BITBRpodcast Facebook.com/BITBRpodcast https://bourboninthebackroom.buzzsprout.com

Trilogy Outdoors
Season 3 Episode 81 Senator Chip Campsen, Chairman of Senate Fish, Game, and Forestry Committee

Trilogy Outdoors

Play Episode Listen Later Mar 12, 2024 99:24


We are very fortunate in South Carolina to have our legislators at the forefront of our resources and the decisions that are made pertaining to them. In both the House and Senate we have Fish, Game, and Forestry Committees made up of our representatives and senators. Most of these are outdoorsmen and women, just like their constituents, and they care about the impacts these decisions have in our state. We are fortunate to have our very own Sen. Goldfinch serving on this committee, and we are also fortunate that Sen. Chip Campsen (District 43) is the chairman of this committee in the Senate. This week we have Sen. Campsen join us to talk about several pressing issues here in South Carolina. At the top of that list is the fact that the timber business in South Carolina is in decline, and its importance to our state's economy may surprise most. We dive into the issues, and we also discuss in detail the possible resolutions that have been brought to the attention of the statehouse and the committees. We hope that the men and women in the timber and pulpwood business in our state know that there are people working to help save their future and to help ensure this sector of our state economy continues to thrive and grow. Enjoy! Become a supporter of this podcast: https://www.spreaker.com/podcast/trilogy-outdoors--5441492/support.

Iowa Everywhere
CW Pod: Breaking in the new year with health tips from Goldfinch Athletics

Iowa Everywhere

Play Episode Listen Later Jan 9, 2024 55:23


Chris Williams welcomes Jeff Woody and Shawn Nuetzman of Goldfinch Athletics to discuss getting into shape, where to start with New Year's resolutions, and the nutrition behind it all. Learn more about your ad choices. Visit megaphone.fm/adchoices

Little Known Facts with Ilana Levine
Episode 379 - Denis O'Hare

Little Known Facts with Ilana Levine

Play Episode Listen Later Dec 4, 2023 36:45


DENIS O'HARE has been nominated 3 times for Emmy Awards for his work in THIS IS US and AMERICAN HORROR STORY. Other television appearances include THE NEVERS, TRYING, TRUE BLOOD, AMERICAN GODS, THE GOOD WIFE, and BIG LITTLE LIES. He won the Tony Award for Richard Greenberg's TAKE ME OUT (Obie Award, Drama Desk Award) and an OBIE for his performance in AN ILIAD of which he is also the co-writer. Other stage credits include ASSASSINS (Tony nomination), SWEET CHARITY (Drama Desk Award), CABARET, INHERIT THE WIND, MAJOR BARBARA, ELLING, RACING DEMON, HAUPTMANN, INTO THE WOODS, TEN UNKNOWNS, and TARTUFFE at London's National Theatre. Film credits include INFINITE STORM, SWALLOW, LATE NIGHT, THE GOLDFINCH, NOVITIATE, THE NORMAL HEART, DALLAS BUYERS CLUB, THE PROPOSAL, DUPLICITY, MILK , CHANGELING, CHARLIE WILSON'S WAR, MICHAEL CLAYTON, A MIGHTY HEART, HALF NELSON, GARDEN STATE, 21 GRAMS, THE ANNIVERSARY PARTY, PRIVATE LIFE, and THE PARTING GLASS of which he is the screenwriter. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Team Deakins
KELLEY DIXON - Editor

Team Deakins

Play Episode Listen Later Oct 18, 2023 66:22 Very Popular


On this episode of the Team Deakins Podcast, editor Kelley Dixon (OBI-WAN KENOBI, THE GOLDFINCH, BREAKING BAD) joins us for a conversation about all things editing. Kelley starts by sharing her experience working in the MGM/UA mailroom and delivering script pages to the editorial department where she quickly found herself helping cut scenes. Kelley later describes learning the potential applications of visual effects if money is no object and how that insight expanded her way of thinking when approaching editorial problems on projects with relatively larger budgets. We discuss how editing can be used to bring the audience into the state-of-mind of a character and how Kelley works with a director to realize their vision in the edit. Throughout the episode, we debate whether or not editing is an invisible art, and we contemplate our favourite note: “Can you just make it better?” - This episode is sponsored by Fiilex Instagram: @fiilexled

BirdNote
Birds Love Sunflowers

BirdNote

Play Episode Listen Later Aug 30, 2023 1:39


Found throughout North America, the common sunflower can grow up to ten feet high, towering over other herbs and grasses. And that's only half the story: their roots can reach just as deep in the soil. They're rugged, adaptable plants that bring beauty — and food — to the ecosystem. Planting sunflowers in a public green space or a backyard can benefit pollinator insects as well as finches and other birds that seek out their seeds, which often last well into the winter.More info and transcript at BirdNote.org. Want more BirdNote? Subscribe to our weekly newsletter. Sign up for BirdNote+ to get ad-free listening and other perks. BirdNote is a nonprofit. Your tax-deductible gift makes these shows possible.