Physics: the change in direction of a wave as it passes from one medium to another.
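The bending described above is quantified by Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A minimal illustrative sketch in Python (the function name and the air/water indices are standard textbook values, not taken from any entry below):

```python
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Return the refraction angle in degrees via Snell's law,
    or None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1.0:
        return None  # no transmitted ray: total internal reflection
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
angle = refraction_angle(45.0, 1.00, 1.33)
print(round(angle, 1))  # bends toward the normal: 32.1
```

Going the other way (water to air) at a steep enough angle, the same function returns None, which is the total-internal-reflection regime exploited by optical fibers.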
Loaded Radio Podcast: Greg Burgess of Allegaeon Discusses 'The Ossuary Lens' & Warfield Talks Thrash Metal

TL;DR: On this episode of the Loaded Radio Podcast, Scott Penfold sits down with Greg Burgess of Allegaeon to discuss the band's latest album, "The Ossuary Lens", which was released on April 4 via Metal Blade Records. This marks the band's first new music featuring original vocalist Ezra Haynes since his departure in 2015. Also featured is a conversation with German thrash metal trio Warfield, who recently released their new album, "With The Old Breed", via Napalm Records.

Allegaeon Returns with 'The Ossuary Lens'

Technical death metal powerhouse Allegaeon has unleashed their latest full-length album, "The Ossuary Lens," released on April 4 via Metal Blade Records. This album marks a significant moment for the band, as it is their first release with original vocalist Ezra Haynes since his 2015 departure following the "Elements Of The Infinite" album. The return of Haynes brings back Allegaeon's brutal and technically dazzling sound, which fans have been eagerly awaiting.

The band's latest single, "Driftwood," serves as a powerful introduction to the new record, with its unique blend of melodic, technical death metal — a sound Haynes describes as "melotech."

"We're just so happy to finally release new music," said Greg Burgess. "We're working on a full album's worth of material, but I feel like it's gonna drop kind of in chunks and then hopefully the rest of it all at once kind of thing."

Recording once again with producer Dave Otero at Flatline Audio studio in Denver, Allegaeon continues their 17-year relationship with the producer. "Dave always provides a comfortable working environment, amazing ideas, and a career-spanning understanding of what has made Allegaeon, Allegaeon," added Burgess.

The album's overarching theme focuses on various perspectives of death, with each track exploring a unique viewpoint.
Haynes explained, "Each song essentially is a different topic, however there is always a different perspective of death tied to each subject."

Allegaeon's Latest Album - 'The Ossuary Lens' Track Listing:
01. Refraction
02. Chaos Theory
03. Driftwood
04. Dies Irae
05. The Swarm
06. Carried By Delusion
07. Dark Matter Dynamics
08. Imperial
09. Wake Circling Above
10. Scythe

Warfield - German Thrash Metal Titans

Alongside the conversation with Greg Burgess, the Loaded Radio Podcast also features an interview with German thrash metal band Warfield. Their latest album, "With The Old Breed," was released via Napalm Records and continues their tradition of relentless thrash metal inspired by bands like Slayer, Sodom, Kreator, and the Bay Area thrash scene. Warfield, consisting of Johannes Clemens (vocals & bass), Matthias Clemens (guitar), and Dominik Marx (drums), is known for their aggressive approach to social and political themes, which they express through their brutal and fast-paced sound. Their most recent album follows their 2018 debut, "Wrecking Command," which was released via Metal on Met
On March 28, Daniel Núñez will present his album Refraction in Guadalajara for the first time, at LARVA - Laboratorio de Arte Variedades. The concert will explore the textures of ambient and electronic music, accompanied by a carefully designed visual component.

Also premiering is "Un sueño que resuena", a folklore fable created for children: a new, multidisciplinary production. It runs on Sundays March 23 and 30 and April 6 and 13 at 12:00 in the Teatro Experimental de Jalisco. Hosts: Sofía Solorzano and Juan Pablo Balcells. Production: Armando Tiburcio.

Sistema Jalisciense de Radio y Televisión. Listen to the music of the day by clicking here. Visit: www.jaliscoradio.com. Date: March 24, 2025.
In our final episode, the Guiding Lights fight what should be their greatest battle yet. Thank you to Bookwyrm Games for sponsoring the channel! Visit them at https://bookwyrmgames.com and use code DORKTALES to save 15% off your order!

===
Kelly Clark as Dungeon Master

Cast:
Amy Godfrey as Lyric
Christa Mitchell as Carmilla Alizarin
Christine Rattray as Lady Ellasandra
Chris Ross as Sindri
Caitlan Vinkle as Anthea Briarfoot
And Robin Holford as Zai'Rar

Watch us LIVE on Twitch ► https://twitch.tv/dorktales
Visit our website ► https://dorktales.ca
Our Linktree ► https://linktr.ee/dorktales
Join our Discord ► https://discord.gg/zVtE9Ab
Follow our Twitter ► https://twitter.com/dork_tales/
Follow our Instagram ► https://instagram.com/dorktaleschannel/
Find us on Facebook ► https://www.facebook.com/dorktaleschannel/
Listen to our Podcast ► https://dorktales.podbean.com
Support the show on Patreon ► https://www.patreon.com/dorktales/
Buy the cast a coffee ► https://ko-fi.com/dorktales
Buy official Dork Tales Merch ► https://teepublic.com/user/dorktales ► https://dorktalesstore.redbubble.com

So smash the bell, share these videos, and we'll see you soon at our next game!
=== Music credits:

The following music was used for this media project:

Music: Only Teeth Remain by Tim Kulig
Free download: https://filmmusic.io/song/11095-only-teeth-remain
Licensed under CC BY 4.0: https://filmmusic.io/standard-license
Artist website: https://timkulig.com/albums

Tracks from Monument Studios: Distorted Reality, Mindstream, Death Braams, Old Gods, Conflicted, Paladin Choir, Chosen by the Gods
Licensed under the All-In-One Pack or Fantasy Complete 1 & 2
https://www.monumentstudios.net

Tracks from Dark Fantasy Studio: What Lies Beneath
Licensed under a Premium License
http://www.darkfantasystudio.com

Tracks from Joel Steudler: Lonely Mountain
Licensed under a Humble Bundle Collection

It also includes the following licensed music from Ovani Sound: Stepped Into Their Trap

The Path of the Goblin King
Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/b...

It also includes the following licensed music from Game Dev Market: Long Path Ahead, Impossible Encounter, Star Crossed Lovers

Like what you heard? For background ambiance we used sounds from Tabletop Audio! Tabletop Audio is a site with a full toolkit of songs, special effects, and soundboards to bring your adventures to life! The composer, Tim, hosts the site for free, so give it a try and if you have a few spare bucks, definitely donate: the quality of his work is staggering. https://www.tabletopaudio.com

#dungeonsanddragons #dnd #dorktales #dnd5e #actualplay #tabletop #ttrpg #rpg #liveplay #5E #dragonlance #wizardsofthecoast #dndcosplay #d20 #lgbtqa #actualplayrpg
In this special episode, John reports live from Ledger's offices in Paris to cover the Ledger 106 Experience, an event organized in collaboration with Refraction DAO on the eve of NFT Paris.
Space Nuts Episode 494: Radiation Around Jupiter, Light Refraction, and Brown Dwarfs

Join Andrew Dunkley and Professor Jonti Horner in this thought-provoking Q&A edition of Space Nuts, where they tackle a variety of intriguing questions from our listeners. From the complexities of radiation surrounding Jupiter to the effects of light refraction in space, and the mysteries of brown dwarfs, this episode is packed with insights that will expand your understanding of the cosmos.

Episode Highlights:
- Radiation Around Jupiter: Fenton from Minnesota dives deep into the types of radiation emitted by Jupiter and the charged particles from its volcanic moon Io. Jonti explains the implications for spacecraft navigating this hazardous environment and how these particles interact with Jupiter's magnetic field.
- Light Refraction and Redshift: Kerry from Mount Gambier wonders about the impact of gas clouds on light refraction and redshift. Jonti clarifies how light behaves when passing through these clouds and reassures listeners that the redshift measurements remain largely unaffected.
- Brown Dwarfs and Binary Systems: Nigel from Brisbane asks whether binary brown dwarfs are destined to collide. Jonti discusses the dynamics of binary systems and the various factors that could lead to such an event, while also exploring the potential for merging to create a star.
- Marsquakes and Planetary Structure: Buddy poses a fascinating question about the origins of marsquakes and whether Mars could eventually break apart. Jonti unpacks the geological processes at play on Mars and the role of Jupiter in shaping the asteroid belt.

For more Space Nuts, including our continually updating newsfeed and to listen to all our episodes, visit our website. Follow us on social media at SpaceNutsPod on Facebook, X, YouTube Music, Tumblr, Instagram, and TikTok. We love engaging with our community, so be sure to drop us a message or comment on your favorite platform.

If you'd like to help support Space Nuts and join our growing family of insiders for commercial-free episodes and more, visit spacenutspodcast.com/about

Stay curious, keep looking up, and join us next time for more stellar insights and cosmic wonders. Until then, clear skies and happy stargazing.

00:00 - Introduction to the episode and topics
02:15 - Discussion on radiation around Jupiter and its implications
10:30 - Light refraction and its impact on redshift
18:00 - Insights into binary brown dwarfs and potential collisions
26:45 - Marsquakes and the internal structure of Mars
30:00 - Closing thoughts and listener engagement

✍️ Episode References
Jupiter's Magnetosphere: https://en.wikipedia.org/wiki/Magnetosphere_of_Jupiter
Marsquakes Research: https://mars.nasa.gov/marsquake/
Brown Dwarfs and Binary Systems: https://en.wikipedia.org/wiki/Brown_dwarf

Become a supporter of this podcast: https://www.spreaker.com/podcast/space-nuts--2631155/support
Learn what river happens to be Lake Ontario's primary water source. Find out which Canadian cities sit along Lake Ontario's northern and southern shorelines. Discover which international river connects the Great Lakes to the North Atlantic Ocean. Understand what a tributary is, and discover the connection between the Genesee River and Lake Ontario. Get an insight into some basic features of the Charlotte Genesee Lighthouse, including the type of oil lamp used for lighting. Learn how the Charlotte Genesee Lighthouse keepers adapted their circumstances during times of inclement weather. Discover how long the lighthouse stayed in operation and where it stands in the present day. Find out firsthand whether any lighthouses on Lake Ontario had a style known as a Bird-Cage Lantern. Determine if there are any other lighthouses in existence that have the Bird-Cage setup. Learn which river ship captains and their crews must navigate to reach the Great Lakes from the Atlantic Ocean. Discover which lighthouse on Lake Ontario marks the official entrance into the international river connecting Canada and the United States. Find out what was introduced at the start of the 1850s from a lighting standpoint. Get a brief introduction to Fresnel lenses, including what took place in 1854 involving one of Lake Ontario's lighthouses. Understand the differences between refraction and reflection. Learn exactly how many different sizes of lighthouse lenses were designed by French physicist Augustin-Jean Fresnel. Go behind the scenes and learn which lighthouse on Lake Ontario was the first to be fitted with a Fresnel lens, including its order. Learn whether any lighthouses on Lake Ontario still have functioning Fresnel lenses in the present day. Hosted on Acast. See acast.com/privacy for more information.
Please join my mailing list here
In this episode, the hosts discuss various topics related to optometry, including the impact of recent fires on their community, the transition to refraction-only practices, and the importance of patient education regarding eye health. They explore the evolving standard of care in optometry, the role of technology like OCT and Optos, and the challenges faced when patients decline dilation. The conversation emphasizes the need for effective communication and education to ensure patients understand the importance of comprehensive eye care.

Resources:
LA Fire COA Due Waiver: https://calopto.formstack.com/forms/waiver
Optometry Fund for disaster relief: https://www.aoafoundation.org/our-programs/optometrys-fund-for-disaster-relief-(ofdr)?sso=y
State Board of Optometry CE Waiver: https://www.optometry.ca.gov/formspubs/ce_exempt.pdf
Visit Philanthropy California for vetted disaster relief funds: https://www.philanthropyca.org/2025-california-disaster-response
Another year, another ton of releases to sift through. I could keep adding and subtracting albums for weeks and still not be entirely satisfied with the final list. But here it is - my Favorite Ambient Albums of 2024. As I was creating this part 1 mix, I kept questioning my choices. And then, when the mix was done and I listened to it, I was like "Wow, this turned out really well." I really like how the mix evolves, with the first half being a bit more electronic and noisy and the second half leaning more orchestral.

Here's the list of my favorite ambient albums of 2024, in alphabetical order:

2fel - stranger flow
Adam Wiltzie - Eleven Fugues For Sodium Pentothal [Kranky]
Akmuo - Dreamwalker [Space of Variants]
Altus - Ultraviolet
Alva Noto - Xerrox, Vol. 5 [Noton 2024]
Benoit Pioulard & Offthesky - Sunder [laaps 2024]
Circa Alto - Faint Structures [Whitelabrecs 2024]
Civilistjävel! - Brödföda [Felt 2024]
Dirk Serries - Streams Of Consciousness Compiled [Projekt]
Eli Keszler - LIVE 2 [LuckyMe]
Extraworld - Perihelion [Exosphere]
Hainbach - The One Who Runs Away Is the Ghost Soundtrack [Seil]
Heavenchord - Atmospheres & Soundscapes [Cold Tear]
Humble Bee & Offthesky - Here In, Absence [IIKKI]
Innesti - Contemplate
Jogging House - Live at Home
Jonas Munk - Mirror Phase [Azure Vista]
Maps and Diagrams - if all will be lost [Quiet Details]
Maps and Diagrams - Islands [Handstitched]
Martin Stürtzer - Lander Modules [Echo Elberfeld]
mastroKristo - Passage [Lost Tribe Sound]
Max Richter - In A Landscape [Decca]
Michael A. Muller - Mirror Music [Deutsche Grammophon]
OdNu + Ümlaut - Abandoned Spaces [Audiobulb]
Pan American & Kramer - Reverberations of Non-Stop Traffic on Redding Road
Polaroid Notes - Quiet Rooms [Whitelabrecs]
Sumner James, Robert Chamberlain, Volcano Lazerbeam, Saroon - Dive 1: Refraction [Bathysphere Records]
Tewksbury - Floes Volumes I-IV [Tridek]
The Green Kingdom - Horizons [The Slow Music Movement]
Ulla & Ultrafog - It Means A Lot [Motion Ward]
v e n n - XIII
William Ryan Fritch - Adhesion [Lost Tribe Sound]

And here are links to each album:

Stranger Flow | 2fel
Eleven Fugues For Sodium Pentothal | Adam Wiltzie
Dreamwalker | Akmuo | Space Of Variants
Ultraviolet | Altus
Alva Noto - Xerrox Vol. 5 | Noton
Sunder | Benoît Pioulard & Offthesky | laaps
Faint Structures | Circa Alto | Whitelabrecs
Brödföda | Civilistjävel!
Streams Of Consciousness Compiled | Dirk Serries | Projekt Records
LIVE 2 | Eli Keszler
Perihelion | Extraworld | Exosphere
The One Who Runs Away Is the Ghost Soundtrack | Hainbach
Atmospheres & Soundscapes | Heavenchord | Cold Tear Records
Here In, Absence | The Humble Bee & Offthesky | IIKKI
Contemplate | Innesti
Live at Home | Jogging House
Mirror Phase | Jonas Munk | Azure Vista
if all will be lost | Maps and Diagrams | quiet details
Islands | Maps & Diagrams | Handstitched
Lander Modules | Martin Stürtzer
Passage | mastroKristo | Lost Tribe Sound
In A Landscape | Max Richter | Decca
Mirror Music | Michael A. Muller | Deutsche Grammophon
Abandoned Spaces | OdNu + Ümlaut | Audiobulb
Reverberations of Non-Stop Traffic on Redding Road | Pan American & Kramer
Quiet Rooms | Polaroid Notes | Whitelabrecs
Dive 1: Refraction | Sumner James, Robert Chamberlain, Volcano Lazerbeam, & Saroon | Bathysphere Records
Floes: Volumes I-IV | Tewksbury
Horizons | The Green Kingdom | The Slow Music Movement Label
XIII | v e n n
It Means A Lot | Ulla & Ultrafog | Motion Ward
Adhesion | William Ryan Fritch | Lost Tribe Sound

I hope you enjoy part 1.
Part two is coming soon, followed by a mix of jazz favorites from 2024. What are some of your favorite ambient albums from the past year? Cheers!

T R A C K L I S T :
00:00 Pan American / Kramer - Floating Island (Reverberations of Non-Stop Traffic on Redding Road)
04:22 Maps and Diagrams - Hvar (Islands)
07:35 Jogging House - Live at Home pt. 1 (Live at Home)
13:15 Innesti - Astral Secrets (Contemplate)
16:14 Alva Noto - Xerrox Arc (Xerrox, Vol. 5)
20:53 Hainbach - End of Work (The One Who Runs Away Is the Ghost Soundtrack)
22:45 Akmuo - Dreamwalker (Dreamwalker)
26:50 Sumner James, Robert Chamberlain, Volcano Lazerbeam, & Saroon - Euphotic (Dive 1: Refraction)
31:36 Benoît Pioulard & Offthesky - Fed On Lilies (Sunder)
35:48 Jonas Munk - Dawn Layer (Mirror Phase)
40:40 Polaroid Notes - The Night Returns Without A Sound (Quiet Rooms)
45:40 The Humble Bee & Offthesky - Space (Here In, Absence)
52:00 William Ryan Fritch - Gravitropic (Adhesion)
60:10 Adam Wiltzie - Mexican Helium (Eleven Fugues For Sodium Pentothal)
62:44 Max Richter - A Time Mirror (Biophony) (In A Landscape)
66:36 mastroKristo - Waves (Federico Mosconi Rework) (Passage)
73:40 Dirk Serries - the whispering scale (Streams Of Consciousness Compiled)
82:45 Tewksbury - Aperture (Floes Volumes I-IV)
94:23 end
REFRACTION wraps up its first year with a bang, presenting the 'Red Shift' EP by Aeikus and Pedro Capelossi, now reimagined by AGP, Ercos Blanka, Upwellings, and Luís Bravo.
This mix is a continuation of a trend I noticed several years ago - the use of sax or other wind instruments in ambient music. I think I've done 3 other mixes that feature wind instruments. This one came together fairly quickly, as there is a lot to choose from. Most of the tracks are from 2023 or 2024, with 4 of the 14 a bit older. I really like the flow of this set; one song bleeds right into the next, with the beginnings and endings slightly obscured. This also seems to work well as an autumn-themed mix. I picture a cold, dreary autumn day like the one in the cover art. Speaking of the cover art, does anyone get the connection between the title and the cover?

Links to all the music used in this mix:
https://naimasax.bandcamp.com/album/she-was-like-art
https://annechrisbakker.bandcamp.com/album/a-sketch-in-leaving
https://circaalto.bandcamp.com/album/faint-structures
https://www.lottepen.nl/muziek/
https://www.amazon.com/music/player/albums/B01MSUAGFN?_encoding=UTF8&qid=&sr=
https://bathysphererecords.bandcamp.com/album/dive-1-refraction
https://latenighttales.bandcamp.com/album/late-night-tales-lafur-arnalds
https://naimasax.bandcamp.com/album/this-must-be-the-place
https://johnalsobennett.bandcamp.com/album/klima
https://taylordeupree.bandcamp.com/album/sti-ll-2
https://tapanirinne.bandcamp.com/album/decaying-light
https://machinefabriek.bandcamp.com/album/recytle
https://icrdistribution.bandcamp.com/album/slowed-life
https://wereleasewhateverthefuckwewantrecords.bandcamp.com/album/music-for-a-cosmic-garden

Cheers!
T R A C K L I S T :
00:00 Naima - nO0NOu (She Was Like Art 2024)
03:30 Anne Chris Bakker - Qanik (A Sketch in Leaving 2022)
05:10 Circa Alto - For Reeds (Faint Structures 2024)
11:02 Lotte Pen - Wanderer (Pelgrim 2022)
14:23 Jon Gibson - Extensions II (In Good Company 2010)
19:52 Sumner James, Robert Chamberlain, Volcano Lazerbeam, Saroon - Bathypelagic (Dive 1: Refraction 2024)
24:24 Sarah Neufeld & Colin Stetson - And Still They Move (LateNightTales 2016)
27:14 Naima - Ectenic Force (This Must Be The Place 2023)
34:47 CV & JAB - Dwelling (Κλίμα (Klima) 2023)
38:27 Taylor Deupree - Snowsand (For Clarinets, Vibraphone, Cello & Percussion) (Sti.ll 2024)
41:30 Juha Mäki-Patola & Tapani Rinne - Decaying Light (Decaying Light 2024)
45:12 Machinefabriek - IV (after Horovitz) (Recytle 2023)
48:22 Jonathan Coleclough, Theo Travis, Jeph Jerman - Slowed Life (Slowed Life 2024)
52:39 Takashi Kokubo & Andrea Esperti - Travelling Through The Stars To Gaia (Music For A Cosmic Garden 2023)
62:25 end
French composer and conductor. His teachers, Olivier Messiaen and René Leibowitz, introduced him to contemporary music, which he enriched both as a creator and as a performer. In 1970 he founded IRCAM (Institut de Recherche et Coordination Acoustique/Musique), which he directed until 1992.

You have listened to:
Notations IV. Rythmique (1945-1978). Wiener Philharmoniker; Claudio Abbado, conductor. Deutsche Grammophon (1990)
Pli selon pli: portrait de Mallarmé. Don [du poème] (1957-1989). Christine Schäfer, soprano; Ensemble Intercontemporain; Pierre Boulez, conductor. Deutsche Grammophon (2002)
Répons. Introduction (1981-1984). Ensemble Intercontemporain; Pierre Boulez, conductor. Deutsche Grammophon (1998)
Rituel in memoriam Maderna (1975). BBC Symphony Orchestra; Pierre Boulez, conductor. Sony (1990)

Selected bibliography:
ÁGUILA, Jesús, Le Domaine musical: Pierre Boulez et vingt ans de création contemporaine. Fayard, 1992
—, "Entrevista con Pierre Boulez, 1945-2006: ¿Es transmisible la experiencia del serialismo?". Doce Notas Preliminares, n.º 17 (2006), pp. 10-29*
ALBÈRA, Philippe, Pli selon pli de Pierre Boulez: entretien et études. Contrechamps, 2003*
BOULEZ, Pierre, Penser la musique aujourd'hui. Gonthier, 1964*
—, Hacia una estética musical. Monte Ávila, 1992*
—, Puntos de referencia. Gedisa, 2008*
—, Escritura del gesto: conversaciones con Cécile Gilly. Gedisa, 2012
BOULEZ, Pierre and André Schaeffner, Correspondance: 1954-1970. Fayard, 1998
CAMPBELL, Edward, Boulez: Music and Philosophy. Cambridge University Press, 2014
CAMPBELL, Edward and Peter O'Hagan (eds.), Pierre Boulez Studies. Cambridge University Press, 2016*
COULT, Tom, "Pierre Boulez's Sur incises: Refraction, Crystallisation, and the Absent Idea(l)". Tempo, vol. 67, n.º 264 (2013), pp. 2-21
FERNÁNDEZ GUERRA, Jorge, Pierre Boulez. Círculo de Bellas Artes, 1985*
GOLDMAN, Jonathan, "Boulez and the Spectralists between Descartes and Rameau: Who Said What about Whom?". Perspectives of New Music, vol. 48, n.º 2 (2010), pp. 208-232*
—, The Musical Language of Pierre Boulez: Writings and Compositions. Cambridge University Press, 2014
GRIFFITHS, Paul, Boulez, Oxford Studies of Composers. Oxford University Press, 1978
GULDBRANDSEN, Erling E. and Pierre Boulez, "Pierre Boulez in Interview, 1996 (I). Modernism, History, and Tradition". Tempo, vol. 65, n.º 255 (2011), pp. 9-16*
—, "Pierre Boulez in Interview, 1996 (II). Serialism Revisited". Tempo, vol. 65, n.º 256 (2011), pp. 18-24*
—, "Pierre Boulez in Interview, 1996 (III). Mallarmé, Musical Form, and Articulation". Tempo, vol. 65, n.º 257 (2011), pp. 11-21*
—, "Pierre Boulez in Interview, 1996 (IV). Some Broader Topics". Tempo, vol. 65, n.º 258 (2011), pp. 37-43*
JAMEUX, Dominique and Susan Bradshaw, Pierre Boulez. Harvard University Press, 1990
KOBLYAKOV, Lev, Pierre Boulez: A World of Harmony. Routledge, 2010
LELEU, Jean-Louis and Pascal Decroupet (eds.), Pierre Boulez: techniques d'écriture et enjeux esthétiques. Contrechamps, 2006
MEÏMOUN, François, Entretien avec Pierre Boulez. La naissance d'un compositeur. Aedam Musicae, 2010
—, La Construction du langage musical de Pierre Boulez: la première sonate pour piano. Aedam Musicae, 2019
MERLIN, Christian, Pierre Boulez. Fayard, 2019
NATTIEZ, Jean-Jacques, "De las artes plásticas a la música: Pierre Boulez, a la escucha de Paul Klee". Bajo Palabra: Revista de Filosofía, época 2, n.º 7 (2012), pp. 117-128*
O'HAGAN, Peter, "From Sketch to Score: A Facsimile Edition of Boulez's Le Marteau sans Maître". Music & Letters, vol. 88, n.º 4 (2007), pp. 632-644*
—, Pierre Boulez and the Piano: A Study in Style and Technique. Routledge, 2018
PEYSER, Joan, To Boulez and Beyond. Scarecrow Press, 2008
ROSEN, Charles, "La música para piano de Pierre Boulez". Quodlibet: Revista de Especialización Musical, n.º 28 (2004), pp. 42-56*
SALEM, Joseph Robert, Pierre Boulez: The Formative Years. University Press, 2023
SAMUEL, Claude, Pierre Boulez. Éclats 2002. Mémoire du Livre, 2002
WALTERS, David, "Artistic Orientations, Aesthetic Concepts, and the Limits of Explanation: An Interview with Pierre Boulez". In: Contemporary Music: Theoretical and Philosophical Perspectives. Edited by Max Paddison and Irène Deliège. Ashgate, 2010*
WILLIAMS, Alastair, "Répons, de Pierre Boulez ¿fantasmagoría o articulación de espacio?". Quodlibet: Revista de Especialización Musical, n.º 26 (2003), pp. 51-68*

*Items marked with an asterisk are available for consultation in the Sala de Nuevas Músicas of the Biblioteca y Centro de Apoyo a la Investigación of the Fundación Juan March.
OpenAI DevDay is almost here! Per tradition, we are hosting a DevDay pregame event for everyone coming to town! Join us with demos and gossip! Also sign up for related events across San Francisco: the AI DevTools Night, the xAI open house, the Replicate art show, the DevDay Watch Party (for non-attendees), and Hack Night with OpenAI at Cloudflare. For everyone else, join the Latent Space Discord for our online watch party and find fellow AI Engineers in your city.

OpenAI's recent o1 release (and the Reflection 70B debacle) has reignited broad interest in agentic general reasoning and tree search methods. While we have covered some of the self-taught reasoning literature on the Latent Space Paper Club, it is notable that Eric Zelikman ended up at xAI, whereas OpenAI's hiring of Noam Brown and now Shunyu suggests more interest in tool-using chain of thought/tree of thought/generator-verifier architectures for Level 3 Agents.

We were more than delighted to learn that Shunyu is a fellow Latent Space enjoyer, and invited him back (after his first appearance on our NeurIPS 2023 pod) for a look through his academic career with Harrison Chase (one year after his first LS show).

ReAct: Synergizing Reasoning and Acting in Language Models
paper link

Following seminal Chain of Thought papers from Wei et al and Kojima et al, and reflecting on lessons from building the WebShop human ecommerce trajectory benchmark, Shunyu's first big hit, the ReAct paper, showed that using LLMs to "generate both reasoning traces and task-specific actions in an interleaved manner" achieved remarkably greater performance (less hallucination/error propagation, higher ALFWorld/WebShop benchmark success) than CoT alone.
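The interleaved reason-then-act loop that the ReAct quote describes can be sketched in a few lines of Python. This is an illustrative skeleton, not the paper's code: `llm` and `tools` are caller-supplied placeholders, and `parse_action` assumes the paper's `Action: tool[argument]` text convention.

```python
def react_loop(question, llm, tools, max_steps=8):
    """Minimal ReAct-style loop: the model alternates free-form
    'Thought' traces with tool-invoking 'Action' steps, and each
    tool observation is appended to the prompt for the next step."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # e.g. "Thought: ...\nAction: search[query]"
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            name, arg = parse_action(step)    # hypothetical text format
            observation = tools[name](arg)    # run the tool, e.g. a search
            transcript += f"Observation: {observation}\n"
    return None  # gave up within the step budget

def parse_action(step):
    """Parse 'Action: tool[argument]' from the model's output."""
    line = step.split("Action:", 1)[1].strip()
    name, _, rest = line.partition("[")
    return name.strip(), rest.rstrip("]")
```

The key point from the episode is visible in the structure: the loop never modifies the environment's tools, it only grows the context (`transcript`) that conditions the next model call.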
In even better news, ReAct scales fabulously with finetuning.

As a member of the elite Princeton NLP group, Shunyu was also a coauthor of the Reflexion paper, which we discuss in this pod.

Tree of Thoughts
paper link here

Shunyu's next major improvement on the CoT literature was Tree of Thoughts:

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role… ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices.

The beauty of ToT is that it doesn't require pretraining with exotic methods like backspace tokens or other MCTS architectures. You can listen to Shunyu explain ToT in his own words on our NeurIPS pod, but also via the ineffable Yannic Kilcher.

Other Work

We don't have the space to summarize the rest of Shunyu's work; you can listen to our pod with him now, and we recommend the CoALA paper and his initial hit webinar with Harrison, today's guest cohost, as well as Shunyu's PhD Defense Lecture and his latest lecture covering a Brief History of LLM Agents.

As usual, we are live on YouTube!
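The ToT idea quoted above, propose several candidate "thoughts", self-evaluate them, expand the most promising, and backtrack when needed, is essentially best-first search over partial reasoning states. A toy sketch under that reading, with `propose` and `evaluate` standing in for the LLM calls the paper uses:

```python
import heapq

def tree_of_thoughts(root, propose, evaluate, is_solution,
                     beam=3, max_expansions=50):
    """Best-first search over partial 'thought' states.
    propose(state) -> candidate next states; evaluate(state) -> score
    (higher is better). The heap keeps the frontier ordered, so the
    search naturally backtracks to an earlier branch whenever it
    scores better than the current one."""
    counter = 0  # tie-breaker so heapq never compares states directly
    frontier = [(-evaluate(root), counter, root)]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, state = heapq.heappop(frontier)
        if is_solution(state):
            return state
        # keep only the top-`beam` children, beam-style pruning
        children = sorted(propose(state), key=evaluate, reverse=True)[:beam]
        for child in children:
            counter += 1
            heapq.heappush(frontier, (-evaluate(child), counter, child))
    return None
```

Note what the episode stresses: nothing here requires special pretraining. The tree lives entirely outside the model; the LLM is only consulted inside `propose` and `evaluate`.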
Show Notes
* Harrison Chase
* LangChain, LangSmith, LangGraph
* Shunyu Yao
* Alec Radford
* ReAct Paper
* Hotpot QA
* Tau Bench
* WebShop
* SWE-Agent
* SWE-Bench
* Tree of Thoughts
* CoALA Paper

Related Episodes
* Our Thomas Scialom (Meta) episode
* Shunyu on our NeurIPS 2023 Best Papers episode
* Harrison on our LangChain episode

Mentions
* Sierra
* Voyager
* Jason Wei
* Tavily
* SERP API
* Exa

Timestamps
* [00:00:00] Opening Song by Suno
* [00:03:00] Introductions
* [00:06:16] The ReAct paper
* [00:12:09] Early applications of ReAct in LangChain
* [00:17:15] Discussion of the Reflexion paper
* [00:22:35] Tree of Thoughts paper and search algorithms in language models
* [00:27:21] SWE-Agent and SWE-Bench for coding benchmarks
* [00:39:21] CoALA: Cognitive Architectures for Language Agents
* [00:45:24] Agent-Computer Interfaces (ACI) and tool design for agents
* [00:49:24] Designing frameworks for agents vs humans
* [00:53:52] UX design for AI applications and agents
* [00:59:53] Data and model improvements for agent capabilities
* [01:19:10] TauBench
* [01:23:09] Promising areas for AI

Transcript

Alessio [00:00:01]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Hey, and today we have a super special episode. I actually always wanted to take like a selfie and go like, you know, POV, you're about to revolutionize the world of agents, because we have two of the most awesome hiring agents in the house. So first, we're going to welcome back Harrison Chase. Welcome. Excited to be here. What's new with you recently, in sort of like the 10, 20 second recap?

Harrison [00:00:34]: LangChain, LangSmith, LangGraph, pushing on all of them. Lots of cool stuff related to a lot of the stuff that we're going to talk about today, probably.

Swyx [00:00:42]: Yeah.

Alessio [00:00:43]: We'll mention it in there.
And the Celtics won the title.

Swyx [00:00:45]: And the Celtics won the title. You got that going on for you. I don't know. Is that like floorball? Handball? Baseball? Basketball.

Alessio [00:00:52]: Basketball, basketball.

Harrison [00:00:53]: Patriots aren't looking good though, so that's...

Swyx [00:00:56]: And then Shunyu, you've also been on the pod, but only in like a sort of oral paper presentation capacity. But welcome officially to the Latent Space pod.

Shunyu [00:01:03]: Yeah, I've been a huge fan. So thanks for the invitation. Thanks.

Swyx [00:01:07]: Well, it's an honor to have you on. You're one of like, you're maybe the first PhD thesis defense I've ever watched in like this AI world, because most people just publish single papers, but every paper of yours is a banger. So congrats.

Shunyu [00:01:22]: Thanks.

Swyx [00:01:24]: Yeah, maybe we'll just kick it off with, you know, what was your journey into using language models for agents? I like that your thesis advisor, I didn't catch his name, but he was like, you know... Karthik. Yeah. It's like, this guy just wanted to use language models and it was such a controversial pick at the time. Right.

Shunyu [00:01:39]: The full story is that in undergrad, I did some computer vision research, and that's how I got into AI. But at the time, I feel like, you know, you're just composing all the GAN or 3D perception or whatever together and it's not exciting anymore. And one day I just see this transformer paper and that's really cool. But I really got into language models only when I entered my PhD and met my advisor Karthik. So he was actually the second author of GPT-1 when he was like a visiting scientist at OpenAI. With Alec Radford?

Swyx [00:02:10]: Yes.

Shunyu [00:02:11]: Wow. That's what he told me. It's like back in OpenAI, they did this GPT-1 together and Ilya just said, Karthik, you should stay because we just solved language. But apparently Karthik is not fully convinced.
So he went to Princeton, started his professorship and I'm really grateful. So he accepted me as a student, even though I have no prior knowledge in NLP. And you know, we just met for the first time and he's like, you know, what do you want to do? And I'm like, you know, you have done those text game things. That's really cool. I wonder if we can just redo them with language models. And that's how the whole journey began. Awesome.Alessio [00:02:46]: So GPT-2 was out at the time? Yes, that was 2019.Shunyu [00:02:48]: Yeah.Alessio [00:02:49]: Way too dangerous to release. And then I guess the first work of yours that I came across was React, which was a big part of your defense. But also Harrison, when you came on the podcast last year, you said that was one of the first papers that you saw when you were getting inspired for LangChain. So maybe give a recap of why you thought it was cool, because you were already working in AI and machine learning. And then, yeah, you can kind of like intro the paper formally. What was interesting to you specifically?Harrison [00:03:16]: Yeah, I mean, I think the interesting part was using these language models to interact with the outside world in some form. And I think in the paper, you mostly deal with Wikipedia. And I think there's some other data sets as well. But the outside world is the outside world. And so interacting with things that weren't present in the LLM and APIs and calling into them and thinking about the React reasoning and acting and kind of like combining those together and getting better results. I'd been playing around with LLMs, been talking with people who were playing around with LLMs. People were trying to get LLMs to call into APIs, do things, and it was always, how can they do it more reliably and better? And so this paper was basically a step in that direction. And I think really interesting and also really general as well. 
Like I think that's part of the appeal is just how general and simple in a good way, I think the idea was. So that it was really appealing for all those reasons.Shunyu [00:04:07]: Simple is always good. Yeah.Alessio [00:04:09]: Do you have a favorite part? Because I have one favorite part from your PhD defense, which I didn't understand when I read the paper, but you said something along the lines, React doesn't change the outside or the environment, but it does change the inside through the context, putting more things in the context. You're not actually changing any of the tools around you to work for you, but you're changing how the model thinks. And I think that was like a very profound thing when I... now that I've been using these tools for like 18 months, I'm like, I understand what you meant, but like to say that at the time you did the PhD defense was not trivial. Yeah.Shunyu [00:04:41]: Another way to put it is like thinking can be an extra tool that's useful.Alessio [00:04:47]: Makes sense. Checks out.Swyx [00:04:49]: Who would have thought? I think it's also more controversial within his world because everyone was trying to use RL for agents. And this is like the first kind of zero-gradient type approach. Yeah.Shunyu [00:05:01]: I think the bigger kind of historical context is that we have these two big branches of AI. So if you think about RL, right, that was pretty much the equivalent of agents at the time. And it's like agent is equivalent to reinforcement learning and reinforcement learning is equivalent to whatever game environment they're using, right? Atari game or Go or whatever. So you have, pretty much, you know, a biased kind of set of methodologies in terms of how reinforcement learning represents agents. On the other hand, I think NLP is like a historical kind of subject. It's not really into agents, right? It's more about reasoning. It's more about solving those concrete tasks. 
And if you look at ACL, right, like each task has its own track, right? Summarization has a track, question answering has a track. So I think really it's about rethinking agents in terms of what could be the new environments. It's not just Atari games or whatever video games, but also those text games or language games. And also thinking about, could there be like a more general kind of methodology beyond just designing specific pipelines for each NLP task? That's like the bigger kind of context, I would say.Alessio [00:06:14]: Is there an inspiration spark moment that you remember or how did you come to this? We had Tri Dao on the podcast and he mentioned he was really inspired working with like systems people to think about Flash Attention. What was your inspiration journey?Shunyu [00:06:27]: So actually before React, I spent the first two years of my PhD focusing on text-based games, or in other words, text adventure games. It's a very kind of small kind of research area and quite ad hoc, I would say. And there are like, I don't know, like 10 people working on that at the time. And have you guys heard of Zork 1, for example? So basically the idea is you have this game and you have text observations, like you see a monster, you see a dragon.Swyx [00:06:57]: You're eaten by a grue.Shunyu [00:06:58]: Yeah, you're eaten by a grue. And you have actions like kill the grue with a sword or whatever. And that's like a very typical setup of a text game. So I think one day after I've seen all the GPT-3 stuff, I just think about, you know, how can I solve the game? Like why are those AI, you know, machine learning methods so stupid, when we are pretty good at solving the game relatively, right? So for the context, the predominant method to solve this text game is obviously reinforcement learning. And the idea is you just try out RL in those games for like millions of steps and you kind of just overfit to the game. 
But there's no language understanding at all. And I'm like, why can't AI solve the game better? And it's kind of like, because we think about the game, right? Like when we see this very complex text observation, like you see a grue and you might see a sword, you know, in the right of the room and you have to go through the wooden door to go to that room. You will think, you know, oh, I have to kill the monster and to kill that monster, I have to get the sword, I have to go, right? And this kind of thinking actually helps us kind of zero-shot the game. And it's like, why don't we also enable the text agents to think? And that's kind of the prototype of React. And I think that's actually very interesting because the prototype, I think, was around November of 2021. So that's even before like chain of thought or whatever came up. So we did a bunch of experiments in the text game, but it was not really working that well. Like those text games are just too hard. I think today it's still very hard. Like if you use GPT-4 to solve it, it's still very hard. So the change came when I started the internship in Google. And apparently Google cares less about text games, they care more about what's more practical. So pretty much I just reapplied the idea, but to more practical kind of environments like Wikipedia or simpler text games like ALFWorld, and it just worked. It's kind of like you first have the idea and then you try to find the domains and the problems to demonstrate the idea, which is, I would say, different from most of the AI research, but it kind of worked out for me in that case.Swyx [00:09:09]: For Harrison, when you were implementing React, what were people applying React to in the early days?Harrison [00:09:14]: I think the first demo we did probably had like a calculator tool and a search tool. So like general things, we tried to make it pretty easy to write your own tools and plug in your own things. 
And so this is one of the things that we've seen in LangChain is people who build their own applications generally write their own tools. Like there are a few common ones. I'd say like the three common ones might be like a browser, a search tool, and a code interpreter. But then other than that-Swyx [00:09:37]: The LMS. Yep.Harrison [00:09:39]: Yeah, exactly. It matches up very nice with that. And we actually just redid like our integrations docs page, and if you go to the tool section, they like highlight those three, and then there's a bunch of like other ones. And there's such a long tail of other ones. But in practice, like when people go to production, they generally have their own tools or maybe one of those three, maybe some other ones, but like very, very few other ones. So yeah, I think the first demo was a search and a calculator one. And there's- What's the data set?Shunyu [00:10:04]: Hotpot QA.Harrison [00:10:05]: Yeah. Oh, so there's that one. And then there's like the celebrity one by the same author, I think.Swyx [00:10:09]: Olivia Wilde's boyfriend squared. Yeah. 0.23. Yeah. Right, right, right.Harrison [00:10:16]: I'm forgetting the name of the author, but there's-Swyx [00:10:17]: I was like, we're going to over-optimize for Olivia Wilde's boyfriend, and it's going to change next year or something.Harrison [00:10:21]: There's a few data sets kind of like in that vein that require multi-step kind of like reasoning and thinking. So one of the questions I actually had for you in this vein, like the React paper, there's a few things in there, or at least when I think of that, there's a few things that I think of. There's kind of like the specific prompting strategy. Then there's like this general idea of kind of like thinking and then taking an action. And then there's just even more general idea of just like taking actions in a loop. Today, like obviously language models have changed a lot. We have tool calling. 
The specific prompting strategy probably isn't used super heavily anymore. Would you say that like the concept of React is still used though? Or like do you think that tool calling and running tool calling in a loop, is that ReactSwyx [00:11:02]: in your mind?Shunyu [00:11:03]: I would say like it's like more implicitly used than explicitly used. To be fair, I think the contribution of React is actually twofold. So first is this idea of, you know, we should be able to use tools in a very general way. Like there should be a single kind of general method to handle interaction with various environments. I think React is the first paper to demonstrate the idea. But then I think later there is Toolformer or whatever, and this becomes like a trivial idea. But I think at the time, that's like a pretty non-trivial thing. And I think the second contribution is this idea of what people call like inner monologue or thinking or reasoning or whatever, to be paired with tool use. I think that's still non-trivial because if you look at the default function calling or whatever, like there's no inner monologue. And in practice, that actually is important, especially if the tool that you use is pretty different from the training distribution of the language model. I think those are the two main things that are kind of inherited.Harrison [00:12:10]: On that note, I think OpenAI even recommended when you're doing tool calling, it's sometimes helpful to put a thought field in the tool, along with all the actual required arguments,Swyx [00:12:19]: and then have that one first.Harrison [00:12:20]: So it fills out that first, and they've shown that that's yielded better results. The reason I ask is just like this same concept is still alive, and I don't know whether to call it a React agent or not. I don't know what to call it. I think of it as React, like it's the same ideas that were in the paper, but it's obviously a very different implementation at this point in time. 
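The pattern described here, an explicit thought emitted alongside each tool call and run in a loop, can be sketched roughly as follows. This is a toy sketch, not LangChain's or OpenAI's actual API: `fake_llm`, the tool names, and the canned trajectory are all stand-ins for a real model and real tools.

```python
# Minimal ReAct-style loop: each step the "model" emits a thought plus a
# tool call; the tool's observation is appended to the context and fed back.

def search(query: str) -> str:
    # Stub search tool; a real agent would call an API here.
    return "Colorado orogeny: elevation range 1,800 to 7,000 ft"

def finish(answer: str) -> str:
    # Terminal "tool" that just returns the final answer.
    return answer

TOOLS = {"search": search, "finish": finish}

def fake_llm(context: str) -> dict:
    # Canned two-step trajectory standing in for a real model call.
    if "Observation" not in context:
        return {"thought": "I should look up the elevation first.",
                "tool": "search", "arg": "Colorado orogeny elevation"}
    return {"thought": "The observation answers the question.",
            "tool": "finish", "arg": "1,800 to 7,000 ft"}

def react(question: str, llm=fake_llm, max_steps: int = 5) -> str:
    context = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(context)  # one structured call: thought + action together
        observation = TOOLS[step["tool"]](step["arg"])
        if step["tool"] == "finish":
            return observation
        # Interleave thought / action / observation in the growing context.
        context += (f"\nThought: {step['thought']}"
                    f"\nAction: {step['tool']}[{step['arg']}]"
                    f"\nObservation: {observation}")
    return "no answer"
```

The "thought field first" recommendation mentioned above corresponds to the model filling in `thought` before `tool` and `arg` in each structured step.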
And so I just don't know what to call it.Shunyu [00:12:40]: I feel like people will sometimes think more in terms of different tools, right? Because if you think about a web agent versus, you know, like a function calling agent, calling a Python API, you would think of them as very different. But in some sense, the methodology is the same. It depends on how you view them, right? I think people will tend to think more in terms of the environment and the tools rather than the methodology. Or, in other words, I think the methodology is kind of trivial and simple, so people will try to focus more on the different tools. But I think it's good to have a single underlying principle of those things.Alessio [00:13:17]: How do you see the surface of React getting molded into the model? So function calling is a good example of like, now the model does it. What about the thinking? Now most models that you use kind of do chain of thought on their own, they kind of produce steps. Do you think that more and more of this logic will be in the model? Or do you think the context window will still be the main driver of reasoning and thinking?Shunyu [00:13:39]: I think it's already default, right? You do some chain of thought and you do some tool call, the cost of adding the chain of thought is kind of relatively low compared to other things. So it's not hurting to do that. And I think it's already kind of common practice, I would say.Swyx [00:13:56]: This is a good place to bring in either Tree of Thoughts or Reflexion, your pick.Shunyu [00:14:01]: Maybe Reflexion, to respect the time order, I would say.Swyx [00:14:05]: Any backstory as well, like the people involved with Noah and the Princeton group. We talked about this offline, but people don't understand how these research pieces come together and this ideation.Shunyu [00:14:15]: I think Reflexion is mostly Noah's work, I'm more of like an advising kind of role. 
The story is, I don't remember the time, but one day we just see this pre-print that's like "Reflexion: an autonomous agent with dynamic memory and self-reflection" or whatever. And it's kind of like an extension to React, which uses this self-reflection. I'm like, oh, somehow it's become very popular. And Noah reached out to me, it's like, do you want to collaborate on this and make this from an arXiv pre-print to something more solid, like a conference submission? I'm like, sure. We started collaborating and we remain good friends today. And I think another interesting backstory is Noah was contacted by OpenAI at the time. It's like, this is pretty cool, do you want to just work at OpenAI? And I think Sierra also reached out at the same time. It's like, this is pretty cool, do you want to work at Sierra? And I think Noah chose Sierra, but it's pretty cool because he was still like a second year undergrad and he's a very smart kid.Swyx [00:15:16]: Based on one paper. Oh my god.Shunyu [00:15:19]: He's done some other research based on programming language or chemistry or whatever, but I think that's the paper that got the attention of OpenAI and Sierra.Swyx [00:15:28]: For those who haven't gone too deep on it, the way that you presented the insight of React, can you do that also for Reflexion? Yeah.Shunyu [00:15:35]: I think one way to think of Reflexion is that the traditional idea of reinforcement learning is you have a scalar reward and then you somehow back-propagate the signal of the scalar reward to the rest of your neural network through whatever algorithm, like policy gradient or A2C or whatever. And if you think about real life, most of the reward signal is not scalar. It's like your boss told you, you should have done a better job in this, but you could improve on that or whatever. It's not like a scalar reward, like 29 or something. I think in general, humans deal more with non-scalar rewards, or you can say language feedback. 
And the way that they deal with language feedback also has this back-propagation process, right? Because you start from this: you didn't do a good job on task A, and then you reflect on what could have been done differently to make it better. And you kind of change your prompt, right? Basically, you change your prompt on how to do task A and how to do task B, and then you do the whole thing again. So it's really like a pipeline of language, where instead of gradient descent, you have something like text reasoning to replace those gradient descent algorithms. I think that's one way to think of Reflexion.Harrison [00:16:47]: One question I have about Reflexion is how general do you think the algorithm there is? And so for context, I think at LangChain and at other places as well, we found it pretty easy to implement React in a standard way. You plug in any tools and it kind of works off the shelf, can get it up and running. I don't think we have an off-the-shelf kind of implementation of Reflexion in kind of the general sense. I think the concepts, absolutely, we see used in different kind of specific cognitive architectures, but I don't think we have one that comes off the shelf. I don't think any of the other frameworks have one that comes off the shelf. And I'm curious whether that's because it's not general enough or it's complex as well, because it also requires running it more times.Swyx [00:17:28]: Maybe that's not feasible.Harrison [00:17:30]: I'm curious how you think about the generality, complexity. Should we have one that comes off the shelf?Shunyu [00:17:36]: I think the algorithm is general in the sense that it's just as general as other algorithms, if you think about policy gradient or whatever, but it's not applicable to all tasks, just like other algorithms. So you can argue PPO is also general, but it works better on some sets of tasks and not on others. I think it's the same situation for Reflexion. 
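The "text reasoning in place of gradient descent" idea can be sketched as a simple retry cycle: attempt, evaluate, turn the failure signal into a verbal reflection, and feed that reflection into the next attempt. Everything here is a stub standing in for model calls and a real evaluator (for example, a unit test suite); it is not the Reflexion paper's actual implementation.

```python
# Reflexion-style loop with stubbed actor, evaluator, and self-reflection.

def attempt(task: str, reflections: list[str]) -> str:
    # Stub "actor": only handles the empty-list case once told to.
    if any("empty" in r for r in reflections):
        return "def head(xs): return xs[0] if xs else None"
    return "def head(xs): return xs[0]"

def evaluate(solution: str) -> tuple[bool, str]:
    # Stub evaluator producing language feedback, not just a scalar.
    if "else None" in solution:
        return True, "all tests passed"
    return False, "IndexError on empty input"

def reflect(feedback: str) -> str:
    # Stub "self-reflection": turn the feedback into verbal advice.
    return f"Previous attempt failed ({feedback}); handle the empty case."

def reflexion(task: str, max_trials: int = 3) -> str:
    reflections: list[str] = []  # accumulated verbal feedback ("memory")
    for _ in range(max_trials):
        solution = attempt(task, reflections)
        ok, feedback = evaluate(solution)
        if ok:
            return solution
        reflections.append(reflect(feedback))
    return solution
```

The evaluator bottleneck discussed next is visible here: the whole loop only works because `evaluate` produces a trustworthy, informative signal.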
And I think a key bottleneck is the evaluator, right? Basically, you need to have a good sense of the signal. So for example, if you are trying to do a very hard reasoning task, say mathematics, for example, and you don't have any tools, you're operating in this chain of thought setup, then reflection will be pretty hard because in order to reflect upon your thoughts, you have to have a very good evaluator to judge whether your thought is good or not. But that might be as hard as solving the problem itself or even harder. The principle of self-reflection is probably more applicable if you have a good evaluator, for example, in the case of coding. If you have those errors, then you can just reflect on that and how to solve the bug andSwyx [00:18:37]: stuff.Shunyu [00:18:38]: So I think another criterion is that it depends on the application, right? If you have this latency or whatever need for an actual application with an end-user, the end-user wouldn't let you do two hours of tree-of-thought or reflection, right? You need something as soon as possible. So in that case, maybe this is better to be used as a training time technique, right? You do those reflection or tree-of-thought or whatever, you get a lot of data, and then you try to use the data to train your model better. And then in test time, you still use something as simple as React, but that's already improved.Alessio [00:19:11]: And if you think of the Voyager paper as a way to store skills and then reuse them, how would you compare this reflective memory and at what point is it just doing RAG on the memory versus you want to start to fine-tune some of them or what's the next step once you get a very long reflective corpus? Yeah.Shunyu [00:19:30]: So I think there are two questions here. The first question is, what type of information or memory are you considering, right? 
Is it like semantic memory that stores knowledge about the world, or is it the episodic memory that stores trajectories or behaviors, or is it more of a procedural memory like in Voyager's case, like skills or code snippets that you can use to do actions, right?Swyx [00:19:54]: That's one dimension.Shunyu [00:19:55]: And the second dimension is obviously how you use the memory, either retrieving from it, using it in the context, or fine-tuning on it. I think the Cognitive Architectures for Language Agents (CoALA) paper has a good categorization of all the different combinations. And of course, which way you use it depends on the concrete application and the concrete need and the concrete task. But I think in general, it's good to think of those systematic dimensions and all the possible options there.Swyx [00:20:25]: Harrison also has LangMem. I think you did a presentation at my meetup, and I think you've done it at a couple other venues as well. User state, semantic memory, and append-only state, I think kind of maps to what you just said.Shunyu [00:20:38]: What is LangMem? Can I get like a quick...Harrison [00:20:40]: One of the modules of LangChain for a long time has been something around memory. And I think we're still obviously figuring out what that means, as is everyone kind of in the space. But one of the experiments that we did, and one of the proof of concepts that we did was, technically what it was is you would basically create threads, you'd push messages to those threads in the background, we process the data in a few ways. One, we put it into some semantic store, that's the semantic memory. And then two, we do some extraction and reasoning over the memories to extract. And we let the user define this, but extract key facts or anything that's of interest to the user. Those aren't exactly trajectories, they're maybe closer to the procedural memory. 
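The two orthogonal dimensions just described, what kind of memory is stored versus how it is used, can be written down as a small taxonomy. The category names follow the discussion and the CoALA paper's framing; the example assignments at the bottom are illustrative guesses, not definitive classifications.

```python
from dataclasses import dataclass
from enum import Enum

class MemoryType(Enum):
    SEMANTIC = "knowledge about the world"
    EPISODIC = "past trajectories or events"
    PROCEDURAL = "skills or code snippets for acting"

class MemoryUse(Enum):
    RETRIEVE = "look it up and insert into the context"
    IN_CONTEXT = "keep it resident in the prompt"
    FINE_TUNE = "train the model on it"

@dataclass
class MemoryDesign:
    kind: MemoryType   # what is stored
    usage: MemoryUse   # how it is used (orthogonal to kind)

# Voyager-style skills: procedural memory, retrieved at action time.
voyager = MemoryDesign(MemoryType.PROCEDURAL, MemoryUse.RETRIEVE)
# Generative-agents-style synthesized reflections: closer to semantic memory.
smallville = MemoryDesign(MemoryType.SEMANTIC, MemoryUse.RETRIEVE)
```

Because the two axes are independent, any of the nine kind-by-usage combinations is a valid design point, which is the "totally orthogonal dimensions" point made below.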
Is that how you'd think about it or classify it?Shunyu [00:21:22]: Is it like knowledge about the world, or is it more like how to do something?Swyx [00:21:27]: It's reflections, basically.Harrison [00:21:28]: So in generative worlds.Shunyu [00:21:30]: Generative agents.Swyx [00:21:31]: The Smallville. Yeah, the Smallville one.Harrison [00:21:33]: So the way that they had their memory there was they had the sequence of events, and that's kind of like the raw events that happened. But then every N events, they'd run some synthesis over those events for the LLM to insert its own memory, basically. It's that type of memory.Swyx [00:21:49]: I don't know how that would be classified.Shunyu [00:21:50]: I think of that as more of the semantic memory, but to be fair, I think it's just one way to think of that. But whether it's semantic memory or procedural memory or whatever memory, that's like an abstraction layer. But in terms of implementation, you can choose whatever implementation for whatever memory. So they're totally kind of orthogonal. I think it's more of a good way to think of the things, because from the history of cognitive science and cognitive architecture and how people study even neuroscience, that's the way people think of how the human brain organizes memory. And I think it's more useful as a way to think of things. But it's not like for semantic memory, you have to do this kind of way to retrieve or fine-tune, and for procedural memory, you have to do that. I think those are totally orthogonal kind of dimensions.Harrison [00:22:34]: How much background do you have in cognitive sciences, and how much do you model some of your thoughts on it?Shunyu [00:22:40]: That's a great question, actually. I think one of the undergrad influences for my follow-up research is I was doing an internship at MIT's Computational Cognitive Science Lab with Josh Tenenbaum, and he's a very famous cognitive scientist. 
And I think a lot of his ideas still influence me today, like thinking of things in computational terms and getting interested in language and a lot of stuff, or even developmental psychology kind of stuff. So I think it still influences me today.Swyx [00:23:14]: As a developer that tried out LangMem, the way I view it is just it's a materialized view of a stream of logs. And if anything, that's just useful for context compression. I don't have to use the full context to run it over everything. But also it's kind of debuggable. If it's wrong, I can show it to the user, the user can manually fix it, and I can carry on. That's a really good analogy. I like that. I'm going to steal that. Sure. Please, please. You know I'm bullish on memory databases. I guess, Tree of Thoughts? Yeah, Tree of Thoughts.Shunyu [00:23:39]: I feel like I'm reliving the defense in like a podcast format. Yeah, no.Alessio [00:23:45]: I mean, you had a banger. Well, this is the one where you're already successful and we just highlight the glory. It was really good. You mentioned that since thinking is kind of like taking an action, you can use action search algorithms to search over thinking. So just like you would use tree search to find the next thing. And the idea behind Tree of Thoughts is that you generate all these possible outcomes and then find the best path to get to the end. Maybe back to the latency question, you can't really do that if you have to respond in real time. So what are maybe some of the most helpful use cases for things like this? Where have you seen people adopt it where the high latency is actually worth the wait?Shunyu [00:24:21]: For things that you don't care about latency, obviously. For example, if you're trying to do math, if you're just trying to come up with a proof. But I feel like one type of task is more about searching for a solution. You can try a hundred times, but if you find one solution, that's good. 
For example, if you're finding a math proof or if you're finding a good code to solve a problem or whatever, I think another type of task is more like reacting. For example, if you're doing customer service, you're like a web agent booking a ticket for an end user. Those are more reactive kind of tasks, or more real-time tasks. You have to do things fast. They might be easy, but you have to do it reliably. And you care more about can you solve 99% of the time out of a hundred. But for the type of search type of tasks, then you care more about can I find one solution out of a hundred. So it's kind of symmetric and different.Alessio [00:25:11]: Do you have any data or intuition from your user base? What's the split of these type of use cases? How many people are doing more reactive things and how many people are experimenting with deep, long search?Harrison [00:25:23]: I would say React's probably the most popular. I think there's aspects of reflection that get used. Tree of thought, probably the least so. There's a great tweet from Jason Wei, I think you're now a colleague, and he was talking about prompting strategies and how he thinks about them. And I think the four things that he had was, one, how easy is it to implement? How much compute does it take? How many tasks does it solve? And how much does it improve on those tasks? And I'd add a fifth, which is how likely is it to be relevant when the next generation of models come out? And I think if you look at those axes and then you look at React, reflection, tree of thought, it tracks that the ones that score better are used more. React is pretty easy to implement. Tree of thought's pretty hard to implement. The amount of compute, yeah, a lot more for tree of thought. The tasks and how much it improves, I don't have amazing visibility there. 
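A tree-of-thoughts-style search, where you propose several candidate next thoughts, score them with an evaluator, and expand only the best few, can be sketched with a toy problem. Here the "thoughts" are single digits and the evaluator is trivially checkable; in the real method both `propose` and `score` would be language model calls, so this is only a shape sketch, not the paper's algorithm.

```python
# Toy breadth-first tree-of-thoughts search with pruning (a beam search).

TARGET = "271"  # stand-in for a verifiable goal state

def propose(partial: str) -> list[str]:
    # Stub generator: candidate next "thoughts" from this partial solution.
    return [partial + d for d in "0123456789"]

def score(partial: str) -> float:
    # Stub evaluator: fraction of the target matched position-by-position.
    return sum(a == b for a, b in zip(partial, TARGET)) / len(TARGET)

def tree_of_thoughts(beam_width: int = 2, depth: int = 3) -> str:
    frontier = [""]
    for _ in range(depth):
        candidates = [c for p in frontier for c in propose(p)]
        # Keep only the highest-scoring candidates; prune the rest.
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return frontier[0]
```

This also makes the latency trade-off above concrete: each level costs `beam_width * branching` evaluator calls, versus one call per step for a plain ReAct-style loop.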
But I think if we're comparing React versus tree of thought, React just dominates the first two axes so much that my question around that was going to be like, how do you think about these prompting strategies, cognitive architectures, whatever you want to call them? When you're thinking of them, what are the axes that you're judging them on in your head when you're thinking whether it's a good one or a less good one?Swyx [00:26:38]: Right.Shunyu [00:26:39]: Right. I think there is a difference between a prompting method versus research, in the sense that for research, you don't really even care about does it actually work on practical tasks or does it help? Whatever. I think it's more about the idea or the principle, right? What is the direction that you're unblocking and whatever. And I think for an actual prompting method to solve a concrete problem, I would say simplicity is very important because the simpler it is, the less decision you have to make about it. And it's easier to design. It's easier to propagate. And it's easier to do stuff. So always try to be as simple as possible. And I think latency obviously is important. If you can do things fast and you don't want to do things slow. And I think in terms of the actual prompting method to use for a particular problem, I think we should all be in the minimalist kind of camp, right? You should try the minimum thing and see if it works. And if it doesn't work and there's absolute reason to add something, then you add something, right? If there's absolute reason that you need some tool, then you should add the tool thing. If there's absolute reason to add reflection or whatever, you should add that. Otherwise, if a chain of thought can already solve something, then you don't even need to use any of that.Harrison [00:27:57]: Yeah. Or if it's just better prompting can solve it. 
Like, you know, you could add a reflection step or you could make your instructions a little bit clearer.Swyx [00:28:03]: And it's a lot easier to do that.Shunyu [00:28:04]: I think another interesting thing is like, I personally have never done those kind of like weird tricks. I think all the prompts that I write are kind of like just talking to a human, right? It's like, I don't know. I never say something like, your grandma is dying and you have to solve it. I mean, those are cool, but I feel like we should all try to solve things in a very intuitive way. Just like talking to your co-worker. That should work 99% of the time. That's my personal take.Swyx [00:28:29]: The problem with language models, at least in the GPT-3 era, was that they over-optimized to some sets of tokens in sequence. So like reading the Kojima et al. paper, the one that listed the "let's think step by step" variants, like he tried a bunch of them and they had wildly different results. It should not be the case, but it is the case. And hopefully we're getting better there.Shunyu [00:28:51]: Yeah. I think it's also like a timing thing in the sense that if you think about this whole line of language models, right? Like at the time it was just like a text generator. We don't have any idea how it's going to be used, right? And obviously at the time you will find all kinds of weird issues because it's not trained to do any of that, right? But then I think we have this loop where once we realize chain of thought is important or agents are important or tool use is important, what we see is today's language models are heavily optimized towards those things. So I think in some sense they become more reliable and robust over those use cases. And you don't need to do as much prompt engineering tricks anymore to solve those things. I feel like in some sense, I feel like prompt engineering even is like a slightly negative word at this point because it refers to all those kind of weird tricks that you have to apply. 
But I think we don't have to do that anymore. Like given today's progress, you should just be able to talk to it like a coworker. And if you're clear and concrete and being reasonable, then it should do reasonable things for you.Swyx [00:29:51]: Yeah. The way I put this is you should not be a prompt engineer because it is the goal of the big labs to put you out of a job.Shunyu [00:29:58]: You should just be a good communicator. Like if you're a good communicator to humans, you should be a good communicator to languageSwyx [00:30:02]: models.Harrison [00:30:03]: That's the key though, because oftentimes people aren't good communicators to these language models and that is a very important skill and that's still messing around with the prompt. And so it depends what you're talking about when you're saying prompt engineer.Shunyu [00:30:14]: But do you think it's like very correlated with like, are they like a good communicator to humans? You know, it's like.Harrison [00:30:20]: It may be, but I also think I would say on average, people are probably worse at communicating with language models than with humans right now, at least, because I think we're still figuring out how to do it. You kind of expect it to be magical and there's probably some correlation, but I'd say there's also just like, people are worse at it right now than talking to humans.Shunyu [00:30:36]: We should make it like a, you know, like an elementary school class or whatever, how toSwyx [00:30:41]: talk to language models. Yeah. I don't know. Very pro that. Yeah. Before we leave the topic of trees and searching, not specifically about Q*, but there's a lot of questions about MCTS and this combination of tree search and language models. And I just had to get in a question there about how seriously should people take this?Shunyu [00:30:59]: Again, I think it depends on the tasks, right? So MCTS was magical for Go, but it's probably not as magical for robotics, right? 
So I think right now the problem is not even that we don't have good methodologies, it's more that we don't have good tasks. It's also very interesting, right? Because if you look at my citations, it's like, obviously the most cited are ReAct, Reflexion and Tree of Thoughts. Those are methodologies. But I think an equally important, if not more important, line of my work is benchmarks and environments, right? Like WebShop or SWE-bench or whatever. And I think in general, what people do in academia that I think is not good is they choose a very simple task, like ALFWorld, and then they apply overly complex methods to show they improve 2%. I think you should probably match the level of complexity of your task and your method. I feel like tasks are kind of far behind the methods in some sense, right? Because we have some good test-time approaches, like whatever, ReAct or Reflexion or Tree of Thoughts, or like there are many, many more complicated test-time methods afterwards. But on the benchmark side, we have made a lot of good progress this year, last year. But I think we still need more progress towards that, like better coding benchmarks, better web agent benchmarks, better agent benchmarks, not even for web or code. I think in general, we need to catch up with tasks.Harrison [00:32:27]: What are the biggest reasons in your mind why it lags behind?Shunyu [00:32:31]: I think incentive is one big reason. Like if you see, you know, all the method papers are cited like a hundred times more than the task papers. And also making a good benchmark is actually quite hard. It's almost like a different set of skills in some sense, right? I feel like if you want to build a good benchmark, you need to have a kind of product manager mindset, right? You need to think about why people should use your benchmark, why it's challenging, why it's useful. If you think about like a PhD going into like a school, right?
The prior skill they're expected to have is more about, you know, can they code this method and can they just run experiments and solve that? I think building a benchmark is not the typical prior skill that we have, but I think things are getting better. I think more and more people are starting to build benchmarks and people are saying that it's like a way to get more impact in some sense, right? Because like if you have a really good benchmark, a lot of people are going to use it. But if you have a super complicated test-time method, like it's very hard for people to use it.Harrison [00:33:35]: Are evaluation metrics also part of the reason? Like for some of these tasks that we might want to ask these agents or language models to do, is it hard to evaluate them? And so it's hard to get an automated benchmark. Obviously with SWE-bench you can, and with coding, it's easier, but.Shunyu [00:33:50]: I think that's part of the skillset thing that I mentioned, because I feel like it's like being a product manager, because there are many dimensions and you need to strike a balance and it's really hard, right? If you want to make something very autogradable, like automatically gradable, easy to grade or easy to evaluate, then you might lose some of the realness or practicality. Or it might be practical, but it might not be as scalable, right? For example, if you think about a text game, humans have pre-annotated all the rewards and all the language is real. So it's pretty good on the autogradable dimension and, with actual English, the practical dimension, but it's not scalable, right? It takes like a year for experts to build that game. And I think part of the reason that SWE-bench is so popular now is it kind of hits the balance between these three dimensions, right? Easy to evaluate and being actually practical and being scalable.
Like if I were to criticize some of my prior work, I think WebShop, like it was my initial attempt to get into the benchmark world and I was trying to do a good job striking the balance. But obviously we made it autogradable and it's really scalable, but then I think the practicality is not as high as actually just using GitHub issues, right? Because you're just creating those synthetic tasks.Harrison [00:35:13]: Are there other areas besides coding that jump to mind as being really good for being autogradable?Shunyu [00:35:20]: Maybe mathematics.Swyx [00:35:21]: Classic. Yeah. Do you have thoughts on AlphaProof, the new DeepMind paper? I think it's pretty cool.Shunyu [00:35:29]: I think it's more of a, you know, it's more of like a confidence boost or like sometimes, you know, the work is not even about, you know, the technical details or the methodology that it chooses or the concrete results. I think it's more about a signal, right?Swyx [00:35:47]: Yeah. Existence proof. Yeah.Shunyu [00:35:50]: Yeah. It can be done. This direction is exciting. It kind of encourages people to work more towards that direction. I think it's more like a boost of confidence, I would say.Swyx [00:35:59]: Yeah. So we're going to focus more on agents now and, you know, all of us have a special interest in coding agents. I would consider Devin to be the sort of biggest launch of the year as far as AI startups go. And you guys in the Princeton group worked on SWE-agent alongside SWE-bench. Tell us the story about SWE-agent. Sure.Shunyu [00:36:21]: I think it's kind of like a trilogy, it's actually a series of three works now. So actually the first work is called InterCode, but it's not as famous, I know. And the second work is called SWE-bench and the third work is called SWE-agent. And I was just really confused why nobody was working on coding.
You know, it's like a year ago, but I mean, not everybody's working on coding, obviously, but a year ago, like literally nobody was working on coding. I was really confused. And the people that were working on coding were, you know, trying to solve HumanEval in like a seq2seq way. There's no agent, there's no chain of thought, there's no anything, they're just, you know, fine-tuning the model to improve some points and whatever. Like, I was really confused because obviously coding is the best application for agents because it's autogradable, it's super important, you can make everything like an API or code action, right? So I was confused and I collaborated with some of the students in Princeton and we have this work called InterCode and the idea is, first, if you care about coding, then you should solve coding in an interactive way, meaning more like a Jupyter Notebook kind of way than just writing a program and seeing if it fails or succeeds and stopping, right? You should solve it in an interactive way because that's exactly how humans solve it, right? You don't have to, you know, write a program like next token, next token, next token and stop and never do any edits and you cannot really use any terminal or whatever tool. It doesn't make sense, right? And that's the way people were solving coding at the time, basically like sampling a program from a language model without chain of thought, without tool calls, without refactoring, without anything. So the first point is we should solve coding in a very interactive way and that's a very general principle that applies for various coding benchmarks. And also, I think you can make a lot of agent tasks kind of like interactive coding. If you have Python and you can call any package, then you can literally also browse the internet or do whatever you want, like control a robot or whatever. So that seems to be a very general paradigm.
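The interactive, notebook-style loop described here can be sketched in a few lines. This is a minimal illustration, not InterCode's actual implementation; `query_model` and `execute` are hypothetical stand-ins for a language model call and a sandboxed interpreter.

```python
def solve_interactively(task, query_model, execute, max_turns=10):
    """Alternate between model turns and code execution, feeding each
    result back into the context, instead of sampling one program and
    stopping. (An illustrative sketch of the interactive-coding idea.)"""
    history = [f"Task: {task}"]
    for _ in range(max_turns):
        # The model sees the whole interaction so far, like a Jupyter session.
        code = query_model("\n".join(history))
        result = execute(code)  # stdout, stderr, or an exception message
        history.append(f">>> {code}\n{result}")
        if "SUBMIT" in code:  # the model signals it is done
            return history
    return history
```

The key contrast with the seq2seq approach is the feedback edge: every execution result becomes context for the next model turn.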
But obviously I think a bottleneck is at the time we were still doing, you know, very simple tasks like HumanEval or whatever coding benchmark people proposed. They were super hard in 2021, like 20%, but they were like 95% already in 2023. So obviously the next step is we need a better benchmark. And Carlos and John, who are the first authors of SWE-bench, I think they came up with this great idea that we should just scrape GitHub and solve whatever human engineers are solving. And I think it's actually pretty easy to come up with the idea. And I think in the first week, they already made a lot of progress. They scraped GitHub and got the basic thing working, but then there's a lot of painful infra work and whatever, you know. I think the idea is super easy, but the engineering is super hard. And I feel like that's a very typical signal of a good work in the AI era now.Swyx [00:39:17]: I think also, I think the filtering was challenging, because if you look at open source PRs, a lot of them are just like, you know, fixing typos. I think it's challenging.Shunyu [00:39:27]: And to be honest, we didn't do a perfect job at the time. So if you look at the recent blog post with OpenAI, we improved the filtering so that it's more solvable.Swyx [00:39:36]: I think OpenAI was just like, look, this is a thing now. We have to fix this. These students just rushed it.Shunyu [00:39:45]: It's a good convergence of interests for me.Alessio [00:39:48]: Was that tied to you joining OpenAI? Or was that just unrelated?Shunyu [00:39:52]: It's a coincidence for me, but it's a good coincidence.Swyx [00:39:55]: There is a history of anytime a big lab adopts a benchmark, they fix it. Otherwise, it's a broken benchmark.Shunyu [00:40:03]: So naturally, once we proposed SWE-bench, the next step is to solve it. But I think the typical way you solve something now is you collect some training samples, or you design some complicated agent method, and then you try to solve it.
Either a super complicated prompt, or you build a better model with more training data. But I think at the time, we realized that even before those things, there's a fundamental problem with the interface or the tool that you're supposed to use. Because that's like an ignored problem in some sense. What your tool is, or how that matters for your task. So what we found concretely is that if you just use the text terminal off the shelf as a tool for those agents, there's a lot of problems. For example, if you edit something, there's no feedback. So you don't know whether your edit is good or not. That makes the agent very confused and it makes a lot of mistakes. There are a lot of small problems, you would say. Well, you can try to do prompt engineering and improve that, but it turns out to be actually very hard. We realized that the interface design is actually a very omitted part of agent design. So we did this SWE-agent work. And the key idea is just, even before you talk about what the agent is, you should talk about what the environment is. You should make sure that the environment is actually friendly to whatever agent you're trying to apply. That's the same idea for humans. A text terminal is good for some tasks, like git pull or whatever. But it's not good if you want to look at a browser and whatever. Also, a browser is a good tool for some tasks, but it's not a good tool for other tasks. We need to talk about how to design interfaces, in some sense, where we should treat agents as our customers. It's like when we treat humans as customers, we design human-computer interfaces. We design those beautiful desktops or browsers or whatever, so that it's very intuitive and easy for humans to use. And this whole great subject of HCI is all about that. I think now the research idea of SWE-agent is just, we should treat agents as our customers.
And we should do like, you know… ACI.Swyx [00:42:16]: ACI, exactly.Harrison [00:42:18]: So what are the tools that SWE-agent should have, or a coding agent in general should have?Shunyu [00:42:24]: For SWE-agent, it's like a modified text terminal, which kind of adapts to a lot of the patterns of language models to make it easier for language models to use. For example, now for edit, instead of having no feedback, it will actually have feedback of, you know, actually here you introduced like a syntax error, and you should probably want to fix that, and there's an indent error there. And that makes it super easy for the model to actually do that. And there's other small things, like how exactly you write arguments, right? Like, do you want to write like a multi-line edit, or do you want to write a single-line edit? I think it's more interesting to think about the development process of an ACI rather than the actual ACI for like a concrete application. Because I think the general paradigm is very similar to HCI and psychology, right? Basically, for how people develop HCIs, they do behavior experiments on humans, right? They do A/B tests, right? Like, which interface is actually better? And they do those behavior experiments, kind of like psychology experiments on humans, and change things. And I think what's really interesting for me, for this SWE-agent paper, is we can probably do the same thing for agents, right? We can do A/B tests for those agents and do behavior tests. And through the process, we not only invent better interfaces for those agents, that's the practical value, but we also better understand agents. Just like when we do those A/B tests in HCI, we better understand humans. Doing those ACI experiments, we actually better understand agents.
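The edit-with-feedback idea described here can be sketched with Python's own parser: instead of silently accepting a broken edit, the tool rejects it and tells the agent what went wrong. This is an illustrative sketch of the ACI concept, not SWE-agent's actual edit command; the function name and return shape are made up for the example.

```python
import ast

def edit_with_feedback(source_lines, start, end, replacement):
    """Replace lines [start, end) of a Python file, but run a syntax check
    first and surface any error back to the agent as feedback, leaving the
    original file untouched on failure."""
    candidate = source_lines[:start] + replacement.splitlines() + source_lines[end:]
    try:
        ast.parse("\n".join(candidate))
    except SyntaxError as e:
        # The error message doubles as guidance for the agent's next attempt.
        return source_lines, f"Edit rejected: syntax error at line {e.lineno}: {e.msg}"
    return candidate, "Edit applied."
```

A real ACI would add linting, windowed file views, and similar affordances, but the principle is the same: every action returns a signal the model can act on.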
And that's pretty cool.Harrison [00:43:51]: Besides that A/B testing, what are other processes that people can use to think about this in a good way?Swyx [00:43:57]: That's a great question.Shunyu [00:43:58]: And I think SWE-agent is an initial work. And what we do is kind of the naive approach, right? You just try some interface, and you see what's going wrong, and then you try to fix that. We do this kind of iterative fixing. But I think what's really interesting is there are a lot of promising future directions if we can apply some of the HCI principles more systematically into the interface design. I think that would be a very cool interdisciplinary research opportunity.Harrison [00:44:26]: You talked a lot about agent-computer interfaces and interactions. What about human-to-agent UX patterns? Curious for any thoughts there that you might have.Swyx [00:44:38]: That's a great question.Shunyu [00:44:39]: And in some sense, I feel like prompt engineering is about the human-to-agent interface. But I think there can be a lot of interesting research done about... So prompting is about how humans can better communicate with the agent. But I think there could be interesting research on how agents can better communicate with humans, right? When to ask questions, how to ask questions, what's the frequency of asking questions. And I think those kinds of stuff could be very cool research.Harrison [00:45:07]: Yeah, I think some of the most interesting stuff that I saw here was also related to coding with Devin from Cognition. And they had the three or four different panels where you had the chat, the browser, the terminal, and I guess the code editor as well.Swyx [00:45:19]: There's more now.Harrison [00:45:19]: There's more. Okay, I'm not up to date. Yeah, I think they also did a good job on ACI.Swyx [00:45:25]: I think that's the main learning I have from Devin. They cracked that. Actually, there was no foundational planning breakthrough.
The planner is actually pretty simple, but the ACI is what they broke through on.Shunyu [00:45:35]: I think making the tool good and reliable is probably like 90% of the whole agent. Once the tool is actually good, then the agent design can be much, much simpler. On the other hand, if the tool is bad, then no matter how much you put into the agent design, planning or search or whatever, it's still going to be trash.Harrison [00:45:53]: Yeah, I'd argue the same. Same with like context and instructions. Like, yeah, go hand in hand.Alessio [00:46:00]: On the tool, how do you think about the tension of like, for both of you, I mean, you're building a library, so even more for you. The tension between making a language or a library that is easy for the agent to grasp and write versus one that is easy for the human to grasp and write. Because, you know, the trend is like more and more code gets written by the agent. So why wouldn't you optimize the framework to be as easy as possible for the model versus for the person?Shunyu [00:46:25]: I think it's possible to design an interface that's both friendly to humans and agents. But what do you think?Harrison [00:46:29]: We haven't thought about it from that perspective, like we're not trying to design LangChain or LangGraph to be friendly, I mean, friendly for agents to write.Swyx [00:46:42]: But I mean, I think we see this with like,Harrison [00:46:43]: I saw some paper that used TypeScript notation instead of JSON notation for tool calling and it got a lot better performance. So it's definitely a thing. I haven't really heard of anyone designing like a syntax or a language explicitly for agents, but there's clearly syntaxes that are better.Shunyu [00:46:59]: I think function calling is a good example where it's like a good interface for both human programmers and for agents, right?
Like for developers, it's actually a very friendly interface because it's very concrete and you don't have to do prompt engineering anymore. You can be very systematic. And for models, it's also pretty good, right? Like it can use all the existing coding content. So I think we need more of those kinds of designs.Swyx [00:47:21]: I will mostly agree and I'll slightly disagree in terms of this, which is like, whether designing for humans also overlaps with designing for AI. So Malte Ubl, who's the CTO of Vercel, who is creating basically JavaScript's competitor to LangChain, they're observing that basically, like if the API is easy to understand for humans, it's actually much easier to understand for LLMs, for example, because there are no overloaded functions. They don't behave differently under different contexts. They do one thing and they always work the same way. It's easy for humans, it's easy for LLMs. And like that makes a lot of sense. And obviously adding types is another one. Like type annotations only help give extra context, which is really great. So that's the agreement. And then a disagreement is that when I use structured output to do my chain of thought, I have found that I change my field names to hint to the LLM of what the field is supposed to do. So instead of saying topics, I'll say candidate topics. And that gives me a better result because the LLM was like, ah, this is just a draft thing I can use for chain of thought. And instead of like summaries, I'll say topic summaries to link the previous field to the current field. So like little stuff like that, I find myself optimizing for the LLM where I, as a human, would never do that. Interesting.Shunyu [00:48:32]: It's kind of like the way you optimize the prompt, it might be different for humans and for machines.
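The field-renaming trick described here is easy to show side by side. These are plain JSON-Schema-style dicts, not tied to any particular provider's structured-output API; the field names are the hypothetical examples from the conversation.

```python
# Baseline schema: names describe the data, but say nothing about each
# field's role in the model's chain of thought.
generic_schema = {
    "type": "object",
    "properties": {
        "topics": {"type": "array", "items": {"type": "string"}},
        "summaries": {"type": "array", "items": {"type": "string"}},
    },
}

# Same structure, renamed to hint the model: `candidate_topics` signals a
# scratch list (a draft for chain of thought), and `topic_summaries`
# explicitly links back to the field before it.
hinted_schema = {
    "type": "object",
    "properties": {
        "candidate_topics": {"type": "array", "items": {"type": "string"}},
        "topic_summaries": {"type": "array", "items": {"type": "string"}},
    },
}
```

The structure and validation behavior are identical; only the names change, which is exactly the kind of LLM-only optimization a human API designer would never bother with.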
You can have a common ground that's both clear for humans and agents, but to improve the human performance versus improving the agent performance, they might move in different directions.Swyx [00:48:48]: Might move in different directions. There's a lot more use of metadata as well, like descriptions, comments, code comments, annotations and stuff like that. Yeah.Harrison [00:48:56]: I would argue that's just you communicatingSwyx [00:48:58]: to the agent what it should do.Harrison [00:49:00]: And maybe you need to communicate a little bit more than to humans because models aren't quite good enough yet.Swyx [00:49:06]: But like, I don't think that's crazy.Harrison [00:49:07]: I don't think that's like- It's not crazy.Swyx [00:49:09]: I will bring this in because it just happened to me yesterday. I was at the Cursor office. They held their first user meetup and I was telling them about the LLM OS concept and why basically every interface, every tool was being redesigned for AIs to use rather than humans. And they're like, why? Like, can we just use Bing and Google for LLM search? Why must I use Exa? Or what's the other one that you guys work with?Harrison [00:49:32]: Tavily.Swyx [00:49:33]: Tavily. A web search API dedicated for LLMs. What's the difference?Shunyu [00:49:36]: Exactly. To the Bing API.Swyx [00:49:38]: Exactly.Harrison [00:49:38]: There weren't great APIs for search. Like the best one, like the one that we used initially in LangChain was SerpAPI, which is like maybe illegal. I'm not sure.Swyx [00:49:49]: And like, you know,Harrison [00:49:52]: and now there are like venture-backed companies.Swyx [00:49:53]: Shout out to DuckDuckGo, which is free.Harrison [00:49:55]: Yes, yes.Swyx [00:49:56]: Yeah.Harrison [00:49:56]: I do think there are some differences though. I think you want, like, I think generally these APIs try to return small amounts of text information, clear legible fields. It's not a massive JSON blob. And I think that matters.
I think like when you talk about designing tools, it's the interface in its entirety, not only the inputs, but also the outputs that really matter. And so I think they try to make the outputs.Shunyu [00:50:18]: They're doing ACI.Swyx [00:50:19]: Yeah, yeah, absolutely.Harrison [00:50:20]: Really?Swyx [00:50:21]: Like there's a whole set of industries that are just being redone for ACI. It's weird. And so my simple answer to them was like the error messages. When you give error messages, they should be basically prompts for the LLM to take and then self-correct. Then your error messages get more verbose, actually, than you normally would with a human. Stuff like that. Like a little, honestly, it's not that big. Again, like, is this worth a venture-backed industry? Unless you can tell us. But like, I think Code Interpreter, I think is a new thing. I hope so.Alessio [00:50:52]: We invested in it to be so.Shunyu [00:50:53]: I think that's a very interesting point. If you're trying to optimize to the extreme, then obviously they're going to be different. For example, the error—Swyx [00:51:00]: Because we take it very seriously. Right.Shunyu [00:51:01]: The error for a language model, the longer the better. But for humans, that will make them very nervous and very tired, right? But I guess the point is more like, maybe we should try to find a co-optimized common ground as much as possible. And then if we have divergence, then we should try to diverge. But it's more philosophical now.Alessio [00:51:19]: But I think like part of it is like how you use it. So Google invented PageRank because ideally you only click on one link, you know, like the top three should have the answer. But with models, it's like, well, you can get 20. So those searches are more like semantic grouping in a way. It's like for this query, I'll return you like 20, 30 things that are kind of good, you know?
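The "error messages as prompts" point can be sketched as a small formatter: instead of a terse human-facing message, the tool returns a verbose error that doubles as a self-correction prompt. This is an illustrative sketch, not any real tool's API; the function name and wording are made up.

```python
def agent_error(exc, attempted_call):
    """Format a tool failure as a self-correction prompt for an LLM:
    deliberately more verbose than a human-facing error, with the failing
    call, likely causes, and explicit next steps spelled out."""
    return (
        f"The call `{attempted_call}` failed with: {exc}.\n"
        "Likely causes: a wrong argument name, a missing required field, "
        "or a malformed value.\n"
        "Re-read the tool's parameter list, fix the call, and try again. "
        "Do not repeat the same call unchanged."
    )
```

For a human this would be exhausting; for a model, every extra sentence is recoverable context for the next attempt.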
So it's less about ranking and it's more about grouping.Shunyu [00:51:42]: Another fundamental thing about HCI is the difference between humans' and machines' memory limits, right? So I think what's really interesting about this concept, HCI versus ACI, is that through the interfaces optimized for each, you can kind of understand some of the fundamental characteristics and differences of humans and machines, right? Why, you know, if you look at find or whatever terminal command, you can only look at one thing at a time, that's because we have a very small working memory. You can only deal with one thing at a time. You can only look at one paragraph of text at the same time. So the interface for us is by design, you know, a small piece of information, but more temporal steps. But for machines, that should be the opposite, right? You should just give them a hundred different results and they should just decide in context what's the most relevant stuff and trade off the context for temporal steps. That's actually also better for language models because like the cost is smaller or whatever. So it's interesting to connect those interfaces to the fundamental differences of those creatures.Harrison [00:52:43]: When you said earlier, you know, we should try to design these to maybe be as similar as possible and diverge if we need to.Swyx [00:52:49]: I actually don't have a problem with them diverging nowHarrison [00:52:51]: and seeing venture-backed startups emerging now, because we are different from machines and code AI. And it's just so early on, like they may still look kind of similar and there may still be small differences, but it's still just so early. And I think we'll only discover more ways that they differ. And so I'm totally fine with them kind of like diverging earlySwyx [00:53:10]: and optimizing for the...Harrison [00:53:11]: I agree. I think it's more like, you know,Shunyu [00:53:14]: we should obviously try to optimize the human interface just for humans.
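The context-for-temporal-steps trade-off described here is simple to show in code: a human-facing interface paginates a small ranked slice, while an agent-facing one dumps many loosely-ranked results at once and lets the model pick in context. A minimal sketch; the function names are illustrative.

```python
def results_for_human(results, page=0, page_size=3):
    """Small working memory: show a few top-ranked results per page,
    spending more temporal steps (page turns) instead of context."""
    start = page * page_size
    return results[start:start + page_size]

def results_for_agent(results, k=30):
    """Large context window: return many results in one shot and let the
    model decide in context what is relevant, trading context for steps."""
    return results[:k]
```

The same underlying result list serves both; only the slice size and interaction pattern differ, which is the HCI-versus-ACI distinction in miniature.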
We've already been doing that for 50 years. We should optimize the agent interface just for agents, but we might also try to co-optimize both and see how far we can get. There are enough people to try all three directions. Yeah.Swyx [00:53:31]: There's a thesis I sometimes push, which is the sour lesson as opposed to the bitter lesson: we're always inspired by human development, but actually AI develops its own path.Shunyu [00:53:40]: Right. We need to understand better, you know, what are the fundamental differences between those creatures.Swyx [00:53:45]: It's funny, really early on this pod, you were like, how much grounding do you have in cognitive development and human brain stuff? And I'm like
Show Notes: https://wetflyswing.com/662 Presented By: Stonefly Nets, TroutRoutes, Smitty's Fly Box In today's Littoral Zone episode, Phil chats with Jason Randall about what trout see and why it matters. Jason has written four books, including his famous trout trilogy, which dives deep into how trout behave in their environment. While Jason's passion lies in rivers and streams, the research and knowledge he's amassed is equally beneficial to stillwater fly fishers. His understanding of how trout see is crucial information for all fly fishers, guiding both our pattern selection and presentation techniques. Show Notes with Jason Randall on What Trout See and Why it Matters. 2:20 - Jason Randall graduated as a veterinarian and did postgraduate work in fish health and medicine. Although he considered a career in fisheries, he ultimately chose private practice. 4:16 - Jason started fly fishing around 40 years ago but got frustrated early on due to a lack of guidance. He took a break, then later tried again, this time with the help of great mentors. Jason says his passion really started during a trip to Colorado. A guide introduced him to a caddis hatch that transformed the river into a feeding frenzy of trout. Watching the stream come alive with caddis and rising fish was a game-changer for Jason. 08:21 - Jason says he was lucky to have some amazing mentors like George Kustin, who guided him in fly fishing and taught him about wet flies and soft hackles. Lefty Kreh also took Jason under his wing. 09:31 - Jason also works with Temple Fork Outfitters on rod design and prototype testing. This year, they introduced a new European nymphing rod called the Elevare, which won Best New Rod at ICAST 2024. Books by Jason Randall 13:00 - Jason's trout fishing trilogy started in 2012 with Jay Nichols from Stackpole Books.
The trilogy covers: Feeding Time: A Fly Fisher's Guide to What, When and Where Trout Eat Trout Sense: A Fly Fisher's Guide to What Trout See, Hear, and Smell Moving Water: A Fly Fisher's Guide to Currents Jason also wrote Nymph Masters, a collaborative effort featuring tips from top nymph anglers like Gary Borger and Lefty Kreh. Trout Sense 17:00 - Trout begin life as prey, eating small organisms like plankton. As they grow, they become predators, feeding on insects, crustaceans, and even small fish or mammals. They retain the wide-set eyes of prey for spotting threats and the sharp focus of predators for hunting. This makes them tricky to catch. 21:10 - Jason explains how light works differently underwater, which affects how trout see. Refraction, or the bending of light when it moves from air to water, can also trick us into thinking we're casting right over a fish when we could be a few feet off. 26:08 - Jason dives into how color fades underwater, starting with red, and how different colors are absorbed at various depths. Fluorescent colors like chartreuse stand out the most and create a strong contrast, which trout notice. 29:18 - Unlike humans, a trout's pupils don't adjust to light, and their eyes have a football-like shape that lets them see clearly both in front and to the side. Search Image and How Trout Decide to Eat 42:11 - Trout use a "search image" to figure out what's food and what's not. They focus on four things: size, shape (profile), movement, and color. If a fish keeps ignoring your fly from far away, it's probably the size or shape that's off. But if they come close and then turn away, Jason says it may be a color-based refusal.
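The bending discussed at 21:10 is Snell's law. As a rough sketch (using the standard refractive indices of about 1.00 for air and 1.33 for water; the function name is just for illustration), you can compute how much a light ray bends as it enters the water, which is why a fish's apparent position and its real position differ.

```python
import math

def refraction_angle(incidence_deg, n_air=1.00, n_water=1.33):
    """Snell's law: n_air * sin(i) = n_water * sin(r).
    Returns the refracted angle (degrees) of a light ray entering water."""
    s = n_air * math.sin(math.radians(incidence_deg)) / n_water
    return math.degrees(math.asin(s))
```

A ray hitting the surface at 45 degrees bends to roughly 32 degrees underwater, so the steeper the viewing angle, the bigger the offset between where the fish appears and where it actually holds.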
For the Record is a conversation series where we speak with all manner of music heads — DJs, music journos, indie label captains, record shop owners, listening bar kingpins, et al — about their stories + the music that makes them. Join the Crate Coalition: https://discord.gg/sAaG6a7bv4 Sébastien Devaud aka Agoria, born in Lyon on January 16, 1976, is a French electronic music producer, DJ and digital artist. Agoria started his music career in the mid-1990s and gained popularity in the early 2000s with his debut album "Blossom", released in 2003. He is known for his unique blend of techno, house, and ambient music, which has earned him a reputation as one of the most innovative and influential electronic music producers of his generation. Agoria has released several albums, EPs, and singles throughout his career, including "The Green Armchair" (2006), "Impermanence" (2011), and "Drift" (2019). He has released five albums, on both the majors Universal Music and Virgin and the indie label PIAS. Agoria has been an early contributor to the French electronic music scene with the creation of his own label Sapiens and the Lyon electronic music festival Les Nuits Sonores. He has also collaborated with various artists, including Tricky, Ela Minus, Carl Craig, Neneh Cherry, Noemie, Rami Khalifé and Blasé. Agoria has performed at major music festivals and clubs around the world, including Coachella, Sonar, Melt, We Love Green, 3points east,… He has also been very active and creative in the art scene; he collaborated with Philippe Parreno and Nicolas Becker on art exhibitions and performances at the Armory Park in NYC (2018) and the turbine hall of Tate Modern in London (2018), and staged his own solo show at Art Basel Miami (2019). Combining art, music and science in his work, he collaborates with biologists, neuroscientists, and philosophers. He is considered a trailblazer of Biological Generative Art.
Agoria's work has lately been exhibited all around the world at major NFT events: in NYC in Times Square during NFT.NYC, in Barcelona at Sonar (co-curated by Antonia Folguera and Diane Drubay), at Refraction festival in NYC, at the Proof of People show in London (curated by VerticalCrypto), and during Berlin Art Week (curated by Anika Meier) in September. Last but not least, last March he was invited to exhibit Phytocene and share his vision of web3 in front of the United Nations.
In this episode, Drs. Kar and Bilkhu share their personal tips to help new grad ODs master refraction skills. Learn from their experiences to reduce pesky prescription redos, create an exceptional patient experience, and even boost optical sales as an associate OD. These insights will help you build trust with your patients and ensure they leave your office feeling truly appreciated from just a simple refraction. Tune in, and take your skills to the next level! The Four Eyes Podcast is brought to you by YoungOD Connect
PREMIERE: Aeikus, Pedro Capelossi - Mist [REFRACTION] by MixCult Records & Radio
Lunar halos are a fairly common sight that occur when high thin clouds containing millions of tiny ice crystals cover much of the sky.
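The refraction behind those halos can be sketched numerically. Moonlight passes through hexagonal ice-crystal prisms whose alternate faces give an effective 60° apex angle, and the halo's angular radius is the prism's minimum-deviation angle. A minimal sketch, using the standard prism minimum-deviation formula and assuming n ≈ 1.31 for ice in visible light:

```python
import math

def minimum_deviation(apex_deg, n):
    """Minimum deviation (in degrees) of light through a prism, from the
    standard formula D_min = 2 * arcsin(n * sin(A / 2)) - A."""
    a = math.radians(apex_deg)
    return math.degrees(2 * math.asin(n * math.sin(a / 2)) - a)

# Hexagonal ice crystal: effective 60-degree apex, refractive index ~1.31.
print(round(minimum_deviation(60, 1.31), 1))  # prints 21.8
```

That deviation of roughly 21.8° is why the ring around the Moon is known as the "22° halo": light refracted through randomly oriented crystals piles up at the minimum-deviation angle, so a bright circle of that radius appears around the light source.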
©️ 2024 Gain Records | Gain Plus www.gainrecords.com #SuperTechno #WeAreWhatWePlay #Dreamtechno
This is an episode of our Patreon-exclusive Q&A show that I do with my producer Anton. Join our Patreon. Here's the call for questions for this month's Q&A: https://www.patreon.com/posts/patrons-send-in-108268455 0:00:00 Intro 0:01:12 Summer break 0:04:46 Another Einstein 0:09:52 Detecting nuclear wars on exoplanets 0:11:22 Planets at Proxima Centauri C 0:13:38 Eclipses on other planets 0:16:03 Upcoming big missions 0:22:22 Rotating black holes 0:27:25 Abiogenesis 0:31:08 Dark matter interactions 0:33:02 Orbital collisions 0:36:19 Antimatter in the magnetosphere 0:37:40 SGL alternative approach 0:40:34 Colonizing space 0:45:07 Sources of gravitational waves 0:47:01 Accretion disks 0:49:43 Galactic Lagrange points 0:52:30 Starship and interstellar missions 0:58:06 Why doesn't Super Heavy and Starship float 1:04:31 Inside of black holes 1:06:46 Astronauts in permanently shadowed Moon regions 1:11:03 Gravitational waves from dark matter 1:12:53 Docking in space 1:16:09 Starship mittens 1:20:15 Age of light 1:23:42 Sun's magnetic field flipping 1:25:14 Earth-size Moon around a Super Duper Jupiter 1:27:26 Hollow black holes 1:29:47 Dark matter and antigravity 1:31:10 Starliner 1:34:17 Ancient astronomical constructions 1:39:13 Questions from Q&As 1:41:45 Regolith problem 1:43:53 Mirror Universe 1:48:03 Fastest approaching star 1:50:04 Refraction of gravitational waves 1:51:30 Distance to Voyagers 1:52:57 Panspermia 1:55:48 SpaceX independence 2:00:46 Capitalism in space 2:03:38 Cosmic horizons 2:06:44 Orion VS Starliner 2:09:51 Effects from supernovae 2:12:11 Private streaming services 2:17:26 Starting fusion in Jupiter 2:22:32 Returning Starship 2:28:28 ZZ Top songs 2:30:12 Meteorites from Venus 2:31:16 Pioneer 6
Malcolm Levy, Raf Katigbak and Greg Liburd are co-founders of Refraction, an artist-owned community leading the next wave of digital art, music and culture — online, onchain and IRL. They all have extensive backgrounds in media, culture, festivals, films and brands, as artists, designers, writers, producers and creative people. Raf was early at VICE. Greg did a bunch with Jordan. Malcolm was the Director of the New Forms Festival 2001-16, and Curator of CODE Live during the 2010 Olympic Games. Today we reveal that Refraction and UFO are teaming up via our radio station, and exclusive alpha about the future of Refraction is shared for the first time on this show. ufo.fm | news.ufo.fm | karma.ufo.fm
SPONSORS
Higher is a lifestyle. A community of optimists on Base that formed on Farcaster. To join high agency crypto natives in a new experiment in onchain brands, visit higher.party
Paragraph is where you can create, distribute & monetize - on your own terms. This publishing platform enables creators to mint posts as collectible content and send token-gated newsletters directly to wallet addresses. To get started with these radically powerful tools, visit paragraph.xyz
OPEN has built state of the art onchain ticketing infrastructure that puts artists, organizers and fans back in control. Anyone can participate in the ticketing revolution through the DAO of creators, builders and token holders. To join the onchain tickets movement with OPEN, head to onopen.xyz
Lore is a group wallet experience for co-ownership. Own expensive NFTs, move memecoin markets and win crypto games together. Check out how you could use Lore with your friends to earn more than you could alone at lore.xyz.
Russell and Major Wragge reach a turning point. The kids reel from the upheaval and attempt to carry on. Dr. Hagen and Colonel Harstad come to a new understanding. This program was directed and executive produced by H.G. Zeissler. Text copyright 2022 by Jason Kent Nord. Featuring the voice talents of Adam Anagnostou as Cale Rhodes, Mike Kelly as Russell Rhodes, John Yonker as Dr. Elliot Hagen, and Luke Langfeldt as Major Wragge. Illustrations, including cover and episode art, by Meredith Tuve. Sound design by Dan Stephans. Story edits by Emily Nord and H.G. Zeissler. A special thanks to our founding SPRQ storytellers. Content warning: Profanity and Animal Violence. Please listen at your own discretion. Rest assured that no animals—cosmic or earthly—were harmed in the production of this episode. Enjoyed what you heard? Check out more SPRQ stories and find out more about SPRQ Media on our website: sprqmedia.com. Or check us out on Instagram or Facebook @sprqmedia. Links in episode notes. Interested in telling stories? Apply to be a SPRQ storyteller today! We're looking for writers, editors, composers, voice talent, and more. It takes a village to tell a story and we need you! Link in episode notes. Audio production copyright 2024 by SPRQ Media LLC, all rights reserved.
In this episode of LIGHT TALK, The Lumen Brothers talk about everything from Stan's Retirement Plans to How Business is Done. Join Steve, Stan, and David as they pontificate about: Vectorworks raises their prices again; Loyalty and Integrity; How long should you keep your lighting records; "Stan the Man Sentimental Kaye"; The importance of archiving; Refraction and Interference; "The Magic Color Filter"; Materials and Light; Alcantara; Painting the floor with light; Miss Kitty; How to use a magic sheet; and Memorizing channel numbers. Nothing is Taboo, Nothing is Sacred, and Very Little Makes Sense.
Once more, it's time for a weekly dose of Stuff to Blow Your Mind and Weirdhouse Cinema listener mail...See omnystudio.com/listener for privacy information.
For EP7, we spoke with Raf Katigbak, Co-Founder of Refraction, an artist-owned community leading the next wave of digital art, music and culture. Raf has a long background and history in the world of culture, and we had an awesome conversation talking about how music, IRL events, and blockchain can all come together to give creators greater control and opportunities within their industry.
Episode: 3293 A look at water in its many forms as a subject of university research. Today, let's talk about research and water.
Have you ever seen a halo around the Moon? Exactly how those halos form remains a topic of research around the world.
This podcast has been graciously sponsored by JewishPodcasts.fm. There is much overhead to maintain this service so please help us continue our goal of helping Jewish lecturers become podcasters and support us with a donation: https://thechesedfund.com/jewishpodcasts/donate
Want to say goodbye to glasses and contact lenses? Then consider getting LASIK surgery from Pacific ClearVision Institute (541-343-5000) in Eugene, OR! Go to https://pcvi.com/treatments-eugene/lasik to find out more. Pacific ClearVision Institute City: Eugene Address: 1125 Darlene Lane Suite 100 Website https://pcvi.com/ Phone +15413435000 Email jsingleton@pcvi.com
Bruce Rettig, author of Refraction, told us his book has now won eight awards and an excerpt is nominated for the Pushcart Prize. Go, Bruce! We lost Rudolph Isley of the Isley Brothers. Give a SHOUT. Del endorses the return of cursive writing in California. Dave, a leftie, was tortured by cursive writing class in grade school. A pox upon you, P.O. Peterson. Del does NOT endorse the practice of quarterbacks licking their fingers between every play. Dave saw Killers of the Flower Moon. Five stars. Not to be confused with Empire of the Summer Moon, another excellent but sad story. The House gets a Speaker...finally. Next week we talk with Pat Hays, author of Silicon Planet: My Life in Computer Chips. In three weeks we interview Agnes Schiffer, author of Sabine's Odyssey. Give us your thoughts: BUCKSTWOOLD@GMAIL.COM Find us on Twitter: @twooldbucks1 Leave a Voice message - click HERE
The Bucks welcome back Bridey Thelen-Heidel and Bruce Rettig, two Lake Tahoe authors who shared their thoughts on their books with us early in the year. If you want to hear the original interviews, Bridey was episode 118 and Bruce was episode 125. Bridey talked about the off-Broadway show in which she performed and also gave us the good news that her book, tentatively titled Bright Eyes, will be published August 27, 2024. You can check out her work at her blog, Bright Eyes. Bridey's day job is teaching high school English and she has had Bruce's three children as students. Bruce has received numerous awards for his memoir, Refraction. While still busy promoting Refraction and doing his day job as a graphic designer, Bruce has begun working on his next project, which will actually be a trilogy. We brought up the notorious Hank the Tank, a female bear that has been terrorizing Tahoe, and Bridey related that a bear just broke into her garage and chugged down the oat milk in the fridge. She had to call Tahoe Toogee to bear-proof her house. We're hoping to have Toogee on as a guest soon. Bruce provided a video of a bear cub rescue he performed a few years ago. We can bearly wait to have Bridey and Bruce back again. Thanks to them both for revisiting the Bucks. Give us your thoughts: BUCKSTWOOLD@GMAIL.COM Find us on Twitter: @twooldbucks1 Leave a Voice message - click HERE
Steph Alinsug is a brand builder, storyteller, community organiser, and media creative, and the founder of VESSEL, an emergent startup designing for the onchain media ecosystem. Until recently she was the Media Steward at Seed Club, an internet-native accelerator with a growing network of alumni projects including ALLSHIPS, Cabin, Forefront, Metalabel, Poolsuite, Protein, Refraction, Songcamp, Take Up Space and Water & Music. Steph helped to grow the brand, events like Seed Club Demo Day, and a media network including the CLUB podcast, later renamed Building at the Edges, hosted by Jess Sloss. In this episode we talk about Broadcast - an inaugural web3 media event held in Brooklyn in May 2023, an invite-only summit focusing on the future of onchain media. It was a collaboration between VESSEL, Foster (a writers collective based in New York), and Seed Club. Participants included Zora Zine, Optimism, Gitcoin, Station Labs, Lens, Metalabel, Protein, Boys Club, Yup, Pleasr, Base, Friends With Benefits, Koop, Beem, Hypeshot, Joke, Tribute Labs, Forefront, Folklore, POAP, and IDEO, among others. We talk with Steph about her learnings from experiences joining the crypto space, how she became involved with the team at Seed Club, and collaborating across the network as it's grown over the past years.
SPONSORS
Zerion combines every corner of web3 in a simple and intuitive app for self-custodial humans. Discover the hottest NFT collections, track your DeFi rewards, and vote in DAOs across 10+ chains. Get started at zerion.io
Lens Protocol is the open-source tech stack for building decentralized social media applications. A permissionless and transparent social graph that is owned by the user. Lens is the last social media handle you'll ever need to create. Visit lens.xyz
Yup is the best of web3 all in one feed. Aggregating content across Lens, Farcaster, Mirror, NFTs, and Crypto Twitter. Search across platforms, customize your feed, and show off your NFTs and POAPS on your profile. Visit yup.io
Valentin, whom we've talked about before on TOB, joins us to share his story of growing up in Russia, living in East Germany for a while, getting a good degree and a good job, and marrying his sweetheart, only to see Russia evolving into something else under Putin. In 2013-2014 he began formulating a plan to leave the country and eventually was able to do it. Listen to one man's story. Val also, amazingly, stumped Del in a round of Stump the Buck. Last week's episode with Frank Young has now been downloaded in nine countries. Joel [Episode 127] has collaborated with a photographer to write poems for her black and white nature pictures. Bruce Rettig [125] announced his book, Refraction, won a Gold Nautilus Award. Congrats to Bruce. Give us your thoughts: BUCKSTWOOLD@GMAIL.COM Find us on Twitter: @twooldbucks1 Leave a Voice message - click HERE
Discover the fascinating connection between optics and pyramids, as they both utilize the principle of refraction. Learn how pyramids, mountain tops, penthouses, and alchemist towers focus chi in unique ways, creating a powerful upflow of energy. Dive into the world of chi refraction with us! E-mail for appointments: wolfgangarndt8@gmail.com https://www.facebook.com/The-Gaia-Eagle-Wolf-Healing-Circle website: https://www.toolsforascensionbywolfgang.com/ YouTube Channel: http://www.youtube.com/@toolsforascensionbyWolfgang Tags: #Pyramids, #ChiRefraction, #Energy, #Spirituality, #Optics, #MountainTops, #Penthouses, #Alchemists, #MysteriesUnveiled, #EnergyFlow, #PowerOfChi
EPISODE 1432: In this KEEN ON show, Andrew talks to the author of CONSTRUCTING A NERVOUS SYSTEM, Margo Jefferson, about Ella Fitzgerald, Cabinet Making, Josephine Baker and the Refraction of her Life through Art. The winner of a Pulitzer Prize for criticism, MARGO JEFFERSON previously served as book and arts critic for Newsweek and the New York Times. Her writing has appeared in, among other publications, Vogue, New York Magazine, The Nation, and Guernica. Her memoir, Negroland, received the National Book Critics Circle Award for Autobiography. She is also the author of On Michael Jackson and is a professor of writing at Columbia University School of the Arts. Her latest book is Constructing a Nervous System (2022). Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children. Learn more about your ad choices. Visit megaphone.fm/adchoices
It took Anwen almost a year to realize that it wasn't just a bad hookup.Listen to more from Reckonings at https://reckonings.showSupport Love + Radio: https://loveandradio.org/memberPlaylists, transcript and more: https://loveandradio.orgSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
I read from Double Gloucester to double refraction. Kate was also on episode #A158 and her husband Chris was on episodes #A156, #A157, and #A159. Kate got her cheeses mixed up. Please forgive her. She thought that Crunchy Red Fox is a type of Double Gloucester cheese but it's really a Red Leicester cheese https://www.beltonfarm.co.uk/our-cheese/red-fox/ Two "They Might Be Giants" music clips! https://youtu.be/ZK6YP1Smbxk https://youtu.be/ty33v7UYYbw "The Eagle" in Cambridge is the pub in England where "Francis Crick announced that he and James Watson had discovered the DNA double-helix." In addition, "The plaque outside was recently updated with a hand-scrawled “+ Franklin” by an anonymous passerby to highlight Rosalind Franklin's key contributions to understanding DNA." https://www.atlasobscura.com/places/the-eagle-cambridge-england I should probably watch the film "Double Indemnity". https://www.imdb.com/title/tt0036775/ The word of the episode is "Double Gloucester". We don't really know why there are single and double variations of this kind of cheese. https://en.wikipedia.org/wiki/Gloucester_cheese Theme music from Tom Maslowski https://zestysol.com/ Merchandising! https://www.teepublic.com/user/spejampar "The Dictionary - Letter A" on YouTube "The Dictionary - Letter B" on YouTube "The Dictionary - Letter C" on YouTube "The Dictionary - Letter D" on YouTube Featured in a Top 10 Dictionary Podcasts list! https://blog.feedspot.com/dictionary_podcasts/ Backwards Talking on YouTube: https://www.youtube.com/playlist?list=PLmIujMwEDbgZUexyR90jaTEEVmAYcCzuq dictionarypod@gmail.com https://www.facebook.com/thedictionarypod/ https://twitter.com/dictionarypod https://www.instagram.com/dictionarypod/ https://www.patreon.com/spejampar https://www.tiktok.com/@spejampar 917-727-5757
A discussion on Bruce Rettig's book Refraction, which YOU really need to buy, leads to a sharing of near-death experiences by the Bucks. Do YOU have an N-D experience you'd like to share? Send us a note or a voicemail. We might read it. Podcasters have fallen out of favor [if they were ever in favor?] as some women will not date them, according to the NYT. Fortunately, the Bucks have been out of the dating pool for decades and their wives don't listen to them or their podcasts. There's nothing new under the sun. Make a friend. We all need them. Learn more here. Seriously. Dave reviews The Ploughmen by Kim Zupan. Five stars. Speaking of time, how do you tell time on the moon? What's with this changing our time twice a year? A dumb idea, whines Dave. Just stay on Standard Time. Del, always thinking creatively, suggests we create Bladder Time, a half-hour timezone. This is where we lose half of you. Send us your comments. Or leave a voicemail below. Forward this to a friend. Or someone you'd like to be your friend. Give us your thoughts: BUCKSTWOOLD@GMAIL.COM Find us on Twitter: @twooldbucks1 Leave a Voice message - click HERE
The Bucks interview Bruce Rettig, who talks about his experiences working 300 miles north of the Arctic Circle in the summers during his college years in the mid-1980s. Bruce talks about his roots, the work ethic instilled in him by his father, and arriving at Prudhoe Bay as a total greenhorn to work 12-hour days doing monotonous and sometimes treacherous work in support of the Alaska pipeline. Bruce meets some interesting characters along the way, learns some lessons about people, about the land, about shades of gray, and about himself. You can order his book HERE. You can learn more about Bruce on his website HERE. Take a look and watch the short video. You can workshop your writing at Tahoe Writers Works with Bruce and his group here. WHAT ARE YOU GOING TO DO WITH THE REST OF YOUR LIFE? Give us your thoughts: BUCKSTWOOLD@GMAIL.COM Find us on Twitter: @twooldbucks1 Leave a Voice message - click HERE
The Bucks slow down to a trot [do bucks trot?] to review recent episodes. #118 with Bridey Thelen-Heidel reading We Owned the Night from her upcoming memoir, Bright Eyes, garnered the most listener feedback of any episode and is on the way to being our most downloaded episode ever, as well. You can catch Bridey's upcoming off-Broadway performance on April 1 in NYC. You can purchase a livestream ticket HERE and watch her from the comfort of your home. Finally, you can listen to Bridey read another story RIGHT HERE. #121 with Mary Meaney and Annie Tregouet, where they told the story of St-Omer, a small town in France, taking in over 500 Ukrainian refugees. Lots of feedback here also, with several people dropping a note to say they donated to Mary's non-profit. #122 with Phoenix, a Ukrainian doctor wounded on the frontlines. Joining Phoenix were Roland Bartetzko, a logistics expert supplying medics with needed medical supplies, and Paul Tregouet in France, who maintains the front end of the supply chain. This was our most widely downloaded episode, with listeners in 39 countries. #123 with Laura Hayes of World Central Kitchen, who helps mobilize chefs around the world to serve hot, healthy meals to victims of disasters around the world. WCK under Chef José Andrés has served over 250 million meals since 2010. Laura also discussed a couple of her award-winning stories she's written as well as a stint teaching English in Japan. Looking forward, there is the upcoming episode with Bruce Rettig next week discussing his book Refraction about his college summers in the '80s working above the Arctic Circle for a maritime company ferrying equipment for the Alaska pipeline. Bring your winter jacket. We will be interviewing some tough Ukrainian women in St-Omer and two very smart Ukrainian students in France. We are negotiating with two other folks, a vixen and a buck, to drop in with their occasional stories. You won't be disappointed.
We close the show today with Cap'n Jim, who tells us about a stop at Vanuatu on his second circumnavigation with his 55-foot catamaran Outremer. Jim provides comfort food for villagers of Unpongar mourning the death of an elder and receives a surprise gift in return. WHAT ARE YOU GOING TO DO WITH THE REST OF YOUR LIFE? Give us your thoughts: BUCKSTWOOLD@GMAIL.COM Find us on Twitter: @twooldbucks1 Leave a Voice message - click HERE
Alexander Unzicker is a theoretical physicist, historian, and author whose award-winning work focuses on the great unanswered questions of physics. He's also the host of Unzicker's Real Physics ( @TheMachian ) where he explores the path forward after a century of particle madness. We talk variable speed of light, the veil of differential geometry, why unification is so difficult, the bright line between mathematics and reality, and much, much more. Support the channel and Dr. Unzicker by buying one of his books: https://amzn.to/3KfcMbL Support the scientific revolution by joining our Patreon: https://bit.ly/3lcAasB Check out our @MaterialAtomics animation of Spin 1/2: https://youtu.be/CWjGO8sukpA Let us know what you think in the comments or on our Discord: https://discord.gg/MJzKT8CQub (00:00:00) Go! (00:05:03) Variable Speed of Light (00:18:27) Einstein, Eddington & Differential Geometry (00:26:41) Refraction (00:41:28) Vacuum Energy & the Mathematical Universe (00:50:47) Size of the Universe (00:59:54) Einstein & Feynman (01:08:56) Simplification & Intuition (01:18:08) Renormalization (01:28:25) The Big Picture (01:34:59) Unification and the Large Number Hypothesis (01:50:15) Unsolved Problems (01:56:44) Incompleteness & Boundary Conditions (02:04:21) Quaternions & the Gods of Modernity (02:10:54) Closing Thoughts #physics #atomic #quantum Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics Join our mailing list https://bit.ly/3v3kz2S PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Michael Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. 
They are both freelance professors at various universities. - Blog: http://DemystifySci.com/blog - RSS: https://anchor.fm/s/2be66934/podcast/rss - Donate: https://bit.ly/3wkPqaD - Swag: https://bit.ly/2PXdC2y SOCIAL: - Discord: https://discord.gg/MJzKT8CQub - Facebook: https://www.facebook.com/groups/DemystifySci - Instagram: https://www.instagram.com/DemystifySci/ - Twitter: https://twitter.com/DemystifySci MUSIC: -Shilo Delay: https://g.co/kgs/oty671
WARNING: This episode deals with domestic violence. There is brief use of strong language. The Bucks welcome Bridey Thelen-Heidel, author and award-winning teacher, who reads her wonderfully-written true-life story, We Owned the Night. This story was published this month in Mutha Magazine and you can read the story here. Her story relates her growing up in an abusive household and is extracted from her upcoming memoir, Bright Eyes. Bridey has a blog, also called Bright Eyes, which you can access here. Bridey's Twitter handle is @BrideyHeidel. Bridey mentions Bruce Rettig and his memoir, Refraction. Learn more here. Bridey also mentions a memoir, In the Shadow of the Valley by Bobi Conn. Find out more here. Find out about Tahoe Writers Works here. We look forward to your comments on this episode. You can email us, as always, and now you can leave a voice message simply by clicking on the link below. We managed to garble some of Bridey's words, so blame Dave in editing for that. Give us your thoughts: BUCKSTWOOLD@GMAIL.COM Find us on Twitter: @twooldbucks1 Leave a Voice message - click HERE
Bruce Rettig recently published Refraction, An Arctic Memoir. Refraction is a Pushcart Prize nominee and has received recognition and multiple awards, including an award for non-fiction in the San Francisco Writing Contest, an International Chanticleer Book Award and a Pacific Northwest Writers Association Literary Award. Bruce also writes literary short stories, creative non-fiction, essays and flash fiction/nonfiction. He continues to be at the helm of his advertising and graphic design agency, with the American Indian Alaska Native Tourism Association as an important client. Refraction recounts his experiences as a young man working in Prudhoe Bay. His writing includes both the human intensity of heavy industry as well as the vastness of the non-human world.
In this episode:
Bruce defines "refraction" and why he chose it as the title for his memoir
Early experiences as a new hire on the North Slope
The complexities of a major industrial push in a harsh, demanding environment
Remembering a couple of notable characters as co-workers, Lee and Swan
Reads an excerpt from Refraction, "The Dynamics of Steel and Ice"
Relates some of the properties of arctic ice, reading an excerpt from the chapter "The Properties of Ice"
Barter Island and the Inuit village of Kaktovik: rescuing the Crowley Prudhoe Bay fleet and getting to know some of the Kaktovik villagers
Complexity and paradox: decisions, choices and divergent paths; thoughts on the fossil fuel era
The importance of conversation and listening: "We all share the same home"
Show notes at https://alaskastoryproject.com
Bruce Rettig: https://brucerettig.com/
Special thanks to Christian Arthur for his music: https://christianarthur.com