POPULARITY
The Big Mates provide wayward and explosionary audio commentary for Arctic Monkeys Live at Liquid Room, Tokyo 2009.Adam, Steve, and Lucas discuss the 2009 concert and talk about the Humbug tour in general!Head to YouTube to watch along here: https://www.youtube.com/watch?v=fQqNc8u1vuE&tOr don't watch along - the episode works fine either way!Our next episode comes out Monday September 15th!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including shows about Manic Street Preachers and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
The Big Mates discuss third albums, dinosaurs, jingles, and Humbug by Arctic Monkeys. Adam, Steve, and Lucas continue their exploration of the career of Arctic Monkeys and conclude their track-by-track analysis of their third album. They explore the composition, tone, lyrics, meaning, and context for each song in turn - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!What do we think of the album? What makes a song memorable? What came first, the chicken or the dickhead? Find out on this episode of What Is Music?Our next episode is out next week, Monday September 8th!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
Summary: Perfectly labeled outcomes in training can still boost reward hacking tendencies in generalization. This can hold even when the train/test sets are drawn from the exact same distribution. We induce this surprising effect via a form of context distillation, which we call re-contextualization: Generate model completions with a hack-encouraging system prompt + neutral user prompt. Filter the completions to remove hacks. Train on these prompt-completion pairs with the system prompt removed. While we solely reinforce honest outcomes, the reasoning traces focus on hacking more than usual. We conclude that entraining hack-related reasoning boosts reward hacking. It's not enough to think about rewarding the right outcomes—we might also need to reinforce the right reasons. Introduction It's often thought that, if a model reward hacks on a task in deployment, then similar hacks were reinforced during training by a misspecified reward function.[1] In METR's report on reward hacking [...] ---Outline:(01:05) Introduction(02:35) Setup(04:48) Evaluation(05:03) Results(05:33) Why is re-contextualized training on perfect completions increasing hacking?(07:44) What happens when you train on purely hack samples?(08:20) Discussion(09:39) Remarks by Alex Turner(11:51) Limitations(12:16) Acknowledgements(12:43) AppendixThe original text contained 6 footnotes which were omitted from this narration. --- First published: August 14th, 2025 Source: https://www.lesswrong.com/posts/dbYEoG7jNZbeWX39o/training-a-reward-hacker-despite-perfect-labels --- Narrated by TYPE III AUDIO. ---Images from the article:
The Big Mates discuss engineering, confectionery, ego mechanics, and Humbug by Arctic Monkeys. Adam, Steve, and Lucas continue their exploration of the career of Arctic Monkeys and their track-by-track analysis of their third album. They explore the composition, tone, lyrics, meaning, and context for each song in turn - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!What is "crying lightning"? Who is the icky man? What's wrong with ice cream men? Find out on this episode of What Is Music?Our next episode is out next week, Monday August 25th, and will continue the deep-dive into Humbug!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
The Big Mates discuss adventures, break ups, expectations, and Humbug by Arctic Monkeys. Adam, Steve, and Lucas continue their exploration of the career of Arctic Monkeys by diving into the band's divisive, murky, and spooky third album.They discuss the writing and recording of the album, explore the overall themes and sound, and (barely) begin their track-by-track analysis, exploring the composition, tone, lyrics, meaning, and context for each song in turn - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!They also find time to remember their own reactions to the album when it was released, the public perception of the change in sound, and try to figure out which genre of rock is most appropriate to describe the vibe. What kind of questions should art pose? Does Jamie have a beard? Are the Tings Tings Lucas' quicksand? Find out on this episode of What Is Music?Our next episode is out next week, Monday August 25th, and will continue the deep-dive into Humbug!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
The Big Mates discuss Burt Bacharach, The Zombies, David Bowie, Scott Walker, Belle & Sebastian, Serge Gainsbourg, The Electric Prunes, Herb Alpert & the Tijuana Brass, John Barry, Ennio Morricone, Astrud Gilberto, Mel Torme, The Sonics, and The Age of the Understatement by The Last Shadow Puppets. Adam, Steve, and Lucas continue their exploration of the career of Arctic Monkeys by briefly stepping outside of it in order to examine the debut album of a side project Alex Turner formed with Miles Kane.They discuss how the project came together, the writing and recording of the album, and they explore the overall themes and sound, taking in the compositions, tones, lyrics, meaning, and context - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!They also look at how the side project was informed by, and goes on to influence the music made by Arctic Monkeys.How different is this album to Turner's previous work? How tongue in cheek is it? How many strings does a bass guitar have? Find out on this episode of What Is Music?Our next episode is out next week, Monday August 18th!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
Produced as part of MATS 8.0 under the mentorship of Alex Turner and Alex Cloud. This research note overviews some early results which we are looking for feedback on. TL;DR: We train language models with RL in toy environments. We show that penalizing some property of the output is sufficient to suppress that property in the chain of thought also, even when that property is relevant to task completion. For example, when we penalize a model for mentioning in its output that it completed a task via a certain form of cheating, its reasoning also omits this fact. We therefore suspect that training against an output monitor is sufficient to cause obfuscated CoTs in more realistic settings. Introduction Chain of thought (CoT) supervision appears in many control and scalable oversight protocols. It has been argued that being able to monitor CoTs for unwanted behavior is a critical property [...] ---Outline:(00:56) Introduction(02:38) Setup(03:48) Single-Turn Setting(04:26) Multi-Turn Setting(06:51) Results(06:54) Single-Turn Setting(08:21) Multi-Turn Terminal-Based Setting(08:25) Word-Usage Penalty(09:12) LLM Judge Penalty(10:12) Takeaways(10:57) AcknowledgementsThe original text contained 1 footnote which was omitted from this narration. --- First published: July 30th, 2025 Source: https://www.lesswrong.com/posts/CM7AsQoBxDW4vhkP3/optimizing-the-final-output-can-obfuscate-cot-research-note --- Narrated by TYPE III AUDIO. ---Images from the article:
The Big Mates discuss CM Punk, Ennio Morricone, Tommy Lee Jones, and Favourite Worst Nightmare by Arctic Monkeys. Adam, Steve, and Lucas continue and conclude their discussion of the band's second album, providing analysis and opinion as they finish up their track-by-track exploration.They discuss the final tracks of the album and then offer up their thoughts and feelings on the album as a whole, give it a score out of ten, and then explore the how the album was received when it came out, and how that perception may have changed over time.They also find time to talk about the band's tour, their headline set at Glastonbury Festival, and their huge shows at Old Trafford.Is this Adam's last episode? What are we teasing? What do you mean you've never seen The Wizard of Oz? Find out on this episode of What Is Music?Our next episode is out next week, Monday August 4th, and will see us provide commentary for Arctic Monkeys Live at the Apollo!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
The Big Mates discuss Robot Wars, bingo, innuendo, and Favourite Worst Nightmare by Arctic Monkeys. Adam, Steve, and Lucas continue their discussion of the band's second album, going track-by-track to provide analysis and opinion.They talk about rhythmic expansion, holiday word games, pedal steel guitar, crazy parties, face coverings, tense relationships, and break ups as they explore the composition, tone, lyrics, meaning, and context for each song in turn - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!How horny is the album? Why does Adam hate Fluorescent Adolescent? How berserk is this house, exactly? Find out on this episode of What Is Music?Our next episode is out next week, Monday July 28th, and will conclude the deep-dive into Favourite Worst Nightmare!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the showSupport the show
The Big Mates discuss Avril Lavigne, Stewie Griffin, Duran Duran, and Favourite Worst Nightmare by Arctic Monkeys. Adam, Steve, and Lucas begin their dissection of the band's follow up to the hugely successful debut by talking about awards ceremonies, the mounting pressure they must have felt, and the attitude going into the creation of the record. They discuss the writing, rehearsal, and recording of the album, explore the overall themes and sound, and begin their track-by-track analysis, exploring the composition, tone, lyrics, meaning, and context for each song in turn - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!How did the band expand their sound? How do you eat a creme egg? Will Arctic Monkeys take our advice on what to do after their first album? Find out on this episode of What Is Music?Our next episode is out next week, Monday July 21st, and will continue the deep-dive into Favourite Worst Nightmare!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
The Big Mates provide insightful and tangential audio commentary for Arctic Monkeys live Reading Festival 2006. Adam, Steve, and Lucas discuss the concert as broadcast on BBC television and radio, and then lovingly stitched together by YouTuber Alex Turner (not that one). They also discuss Reading Festival in general, the festival landscape at the time, and Lucas' attendance at this very show!Here is the video we watch, if you'd like to watch along:https://www.youtube.com/watch?v=8O_A59PIMt8Or don't watch along - the episode works fine either way!Our next episode comes out Monday July 14th and will begin the deep-dive into Favourite Worst Nightmare!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including shows about Manic Street Preachers and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
From dynamic pricing and VIP packages to ‘priority entry', there seem to be more ways than ever to squeeze money out of music fans - and that's not even to mention the sky-rocketing cost of a basic standing ticket. The Times' Jonathan Dean has been investigating why the cost of seeing some of our favourite artists has gone stratospheric, and whether the company Live Nation is to blame. This podcast was brought to you thanks to the support of readers of The Times and The Sunday Times. Subscribe today: http://thetimes.com/thestoryGuest: Jonathan Dean Host: Luke JonesProducer: Hannah VarrallFurther reading: How did Beyoncé tour tickets go from a small fortune to $25 each?Johnny Depp: ‘I was a crash test dummy for MeToo'Further listening: The making of Beyoncé – by friends, family and Team Bey insidersMusic:Bad Guy by Billie Eilish (written by Finneas O'Connell, Billie Eilish O'Connell, published by Darkroom/Interscope Records, Kobalt Music Publishing) Sweet Child O' Mine by Guns N' Roses (written by Duff McKagan, Jeffrey Isbell, Saul Hudson, Steven Adler, W. Axl Rose, published by Guns N Roses P&D)505 by The Arctic Monkeys (written by Alex Turner, Arctic Monkeys, published by Domino Recording Co)Clips: Sky News, This Morning (ITV), BBC News, Channel 4 News, Adam Webb from FanFair Alliance, Reg Walker, ABC News, CBS News, The Cure (YouTube), BBC Music (YouTube)Photo: Getty ImagesGet in touch: thestory@thetimes.com Hosted on Acast. See acast.com/privacy for more information.
The Big Mates discuss The 1975, My Chemical Romance, Klaxons, and Who the Fuck Are Arctic Monkeys? by Arctic Monkeys. Adam, Steve, and Lucas continue their deep-dive into the career of Arctic Monkeys by exploring what happened in the immediate aftermath of their hugrly successful debut album.They discuss the EP Who the Fuck Are Arctic Monkeys?, exploring each track in turn and offering analysis, opinion, and various tengential thoughts. They then turn their attention to the band's 2006 standalone single, Leave Before the Lights Come On.They also find time to talk about the NME Awards tour, the departure of bassist Andy Nicholson, and his replacement, Nick O'Malley.What is in Red Bull? What is at arcticmonkeys.com? Is 5 more than 4? Find out on this episode of What Is Music?Our next episode is out next week, Monday July 7th, and will feature commentary for Arctic Monkeys' performance at Reading Festival 2006.Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
Dos cañonazos de Bleeker abren esta sesión de Turbo 3 en la que te presentamos a un nuevo dúo, Boo Boos, tándem formado por Mark Oliver Everett de Eels y Kate Mattison de 79.5; escuchamos las últimas novedades de Wer Leg (nuevo avance de su próximo disco), Alexandra Savior y GLU, y la colaboración entre Vangoura y Repion en el single '40 de mayo'.Playlist:BLEEKER - Self-MadeBLEEKER - Fuck You I'm LeavingWET LEG - Davina McCallVANGOURA - 40 de mayo (feat. Repion)REPION - Viernes (con Rufus T. Firefly)WEDNESDAY - Wound Up Here (By Holdin On)BIG THIEF - IncomprehensibleALEXANDRA SAVIOR - The MothershipBOO BOOS - C'mon BabySHE & HIM - Why Do You Let Me Stay Here?ALEX KAPRANOS & CLARA LUCIANI - Summer WineISOBEL CAMPBELL & MARK LANEGAN - Come on Over (Turn Me On)IMELDA MAY - What We Did In The Dark (feat. Miles Kane)MILES KANE - Love Is CruelMILES KANE - LoadedMILES KANE - Nothing's Ever Gonna Be Good Enough (feat. Corinne Bailey Rae)MILES KANE - Better Than ThatMILES KANE - InhalerMILES KANE - BaggioçTHE LAST SHADOW PUPPETS - Standing Next To MeARCTIC MONKEYS - One For The RoadMINI MANSIONS - Vertigo (feat. Alex Turner)GLU - Gone Fishin' (feat. Phantogram)PHANTOGRAM - You Don't Get Me High AnymoreTWENTY ONE PILOTS - The ContractLINKIN PARK - Heavy Is the CrownTURNSTILE - Seein' StarsTURNSTILE - DullEscuchar audio
The Big Mates discuss Shane Meadows, Stephen Graham, Paddy Considine, and Whatever People Say I Am, That's What I'm Not by Arctic Monkeys. Adam, Steve, and Lucas continue and conclude their discussion of the band's debut album, providing analysis and opinion as they finish up their track-by-track exploration.They discuss the final tracks of the album and then offer up their thoughts and feelings on the album as a whole, give it a score out of ten, and then explore the enormous success of the album, the critical reaction at the time of release, and how it has stood the test of time.They also find time to talk about the mid-00s cultural turning point, the future of Arctic Monkeys, and songs from the past that remind them of this album.What will we make of the album? Do you sing along in an accent? Who is the world's biggest Ke$ha fan? Find out on this episode of What Is Music?Our next episode is out next week, Monday June 30th, and will see us discuss the band's 2006 EP, Who the Fuck Are Arctic Monkeys? Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
Special Guests: • Neil & Alex Turner, Okie School of Adventure • Jay Yelas, Cast for Kids-ED • Ray Sasser (1948 to 2018) Meet Kinder Outdoors Pro Staff...
The Big Mates discuss Peter Buck, train love, traditional policemen, and Whatever People Say I Am, That's What I'm Not by Arctic Monkeys. Adam, Steve, and Lucas continue their discussion of the band's debut album, going track-by-track to provide analysis and opinion.They also find time to talk about the origins of the post-punk revival scene, the NME's influence on the music scene, and the meanness of 00s culture - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!What is the post-punk revival? What's the difference between cheeky and evil? How often do we talk about Austin Powers? Find out on this episode of What Is Music?Our next episode is out next week, Monday June 23rd, and will conclude the deep-dive into Whatever People Say I Am, That's What I'm Not!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
The Big Mates discuss concept albums, Chris Tarrant, the alt. rock pipeline, and Whatever People Say I Am, That's What I'm Not by Arctic Monkeys. Adam, Steve, and Lucas begin their dissection of the band's debut album by looking at the writing and recording process, the people who helped them make it, and the musical landscape at the time.They also find time to talk about the band's 2005 tour, and their first appearance at Reading and Leeds Festival, before starting the track-by-track analysis of the album, exploring the composition, sound, lyrics, meaning, and context for each song in turn - all from three differing perspectives, from being deeply into music and analysis, to not caring for art or critique, and everything in between!What is a concept album? What kind of dancefloor are they singing about? Would the band have reached their fifth album if they hadn't released their first album? Find out on this episode of What Is Music?Our next episode is out next week, Monday June 16th, and will continue the deep-dive into the band's debut album!Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
The Big Mates discuss bouncers, poets, rappers, and Five Minutes With Arctic Monkeys. Adam, Steve, and Lucas properly begin their deep-dive exploration of Arctic Monkeys by talking about the people that make up the band, the various musical influences they brought with them, the city they grew up in, and the formation of the band itself.They discuss the early days of the band, their gigs and demo recordings, their songs being shared on the internet, and the first official Arctic Monkeys release, Five Minutes With Arctic Monkeys, featuring analysis of both tracks!It's and exciting first step in this career-long deep-dive from three differing perspectives on music, from being deeply into music and analysis, to not caring for art or critique, and everything in between!What's the difference between guitar and drums? Would you go back to a club to get your coat? Will we become one of the world's biggest rock bands in the next 4 years? Find out on this episode of What Is Music?Our next episode is out next week, Monday June 9th, and we'll begin the deep-dive into the band's debut album, Whatever People Say I Am, That's What I'm Not. Join the conversation on:Bluesky: https://bsky.app/profile/whatismusicpod.bsky.socialThreads: https://www.threads.net/@whatismusicpodInstagram: https://www.instagram.com/whatismusicpodE-mail: whatismusicpod@gmail.comGet access to more shows, exclusive bonus content, ad-free episodes of this show, and more music discussion by subscribing to our Patreon!Head to patreon.com/whatismusicpod and receive up to two new episodes of our various shows every week (including our album club and monthly themed playlists!), ad-free archives of What Is Music?, and access to our Patron-only Discord server for even more music (and non-music) discussion!Support our show when starting your own podcast!By signing up to Buzzsprout with this link: https://www.buzzsprout.com/?referrer_id=780379Check out our merch!https://whatismusicpod.redbubble.comDonate to our podcast!https://ko-fi.com/whatismusichttp://whatismusic.buzzsprout.com/Support the show
Arrancamos con una gran noticia: Ángel Stanich ha iniciado su regreso; a través de sus redes sociales, el músico ha anunciado que el 5 de junio lanzará el primer adelanto de su próximo disco y en Turbo 3 lo celebramos escuchando tres de los mejores cañonazos de su carrera. Además, escuchamos la colaboración entre St. Vincent y Mon Laferte traduciendo al español 'Violent Times' del último álbum de la artista neoyorquina, 'All Born Screaming', y las últimas novedades de GLU (proyecto paralelo de Michael Shuman, bajista de Queens Of The Stone Age), Himalayas (remezclados por Wargasm UK), Wet Leg y Guitarricadelafuente.Playlist:ÁNGEL STANICH - Una temporada en el infiernoÁNGEL STANICH - Escupe fuegoÁNGEL STANICH - Metralleta JoeCARLOS ARES - La boca del loboGUITARRICADELAFUENTE - Babieca!VEINTIUNO - Perder los modalesSAM FENDER - People Watching (Live At The O2 Arena, London)THE WAR ON DRUGS - I Don't Live Here Anymore (feat. Lucius)LUCIUS - Old Tape (feat. Adam Granduciel)JAXSON GAMBLE - Let's GoHIMALAYAS - What If...? (Wargasm UK Remix)HIMALAYAS - A Brand New GodMUSE - Time is Running OutST. VINCENT - Broken ManST. VINCENT - Tiempos violentos (feat. Mon Laferte)PORTISHEAD - Sour TimesMINI MANSIONS - Vertigo (feat. Alex Turner)GLU - Love You To PiecesGLU - Boogie ManGORILLAZ - Cracker Island (feat. Thundercat)WET LEG - CPRARCADE FIRE - Circle of TrustARCADE FIRE - ReflektorQUEENS OF THE STONE AGE - Smooth SailingEscuchar audio
In this episode of “This Is Purdue,” we're talking to Alex Turner, Purdue alum and design engineer at Dallara. Alex is a 2022 graduate of Purdue's motorsports engineering program and has used his skills and experience to earn his dream job at Dallara's U.S. headquarters in Indianapolis, just steps away from the Indianapolis Motor Speedway — home of the Indy 500. In this episode you will: Learn about the motorsports engineering program at Purdue University in Indianapolis and the opportunities available to students through the new Dallara partnership Hear how his passion for IndyCar racing led him to the motorsports engineering program at Purdue University in Indianapolis Discover how Alex's journey as a student in Indianapolis and his industry internships helped him land his current role at Dallara Listen to exclusive stories from the IndyCar engineer, including his family ties to the Indy 500 and his favorite race-day memories of “The Greatest Spectacle in Racing” Find out about the innovation and collaboration that goes into being a Dallara design engineer, including what a typical day in his life looks like Learn about Dallara's rich history with IndyCar as the exclusive chassis provider for every car on the grid since 2008 You don't want to miss this special episode that takes you behind the scenes of the world's fastest racing.
The Daily Quiz - Music Today's Questions: Question 1: Which English rock band formed in Manchester in 1991 released the song 'Don't Look Back in Anger'? Question 2: Before going solo, what band was Michael Jackson a member of? Question 3: 'Witchy Woman' was a hit for which rock group in the 1970's? Question 4: Which American rock band released the song 'Wouldn't It Be Nice'? Question 5: Which Irish rock band released the song 'I Still Haven't Found What I'm Looking For'? Question 6: Which alternative rock band is fronted by Alex Turner? Question 7: Who produced David Bowie's 1983 album "Let's Dance"? Question 8: Which American singer, songwriter, dancer and actress released the studio album 'Oops!… I Did It Again'? This podcast is produced by Klassic Studios Learn more about your ad choices. Visit megaphone.fm/adchoices
This week on Rockonteurs, we are delighted to welcome Luke Pritchard from The Kooks to the show.The Kooks were formed in Brighton in the mid 00s and their 2006 debut album ‘Inside In, Inside Out' sold 2 million copies in the UK alone. Their follow up ‘Konk' was another UK No.1 album and they are enjoying a new resurgence as social media has made them a generations new favourite band again. Luke joins Gary and Guy to talk about his influences from Dylan to the Stones, how success so early on in their career was a double-edged sword, how he once kicked Alex Turner from the Arctic Monkey's in the face at a gig and embarrassed himself in front of a Beatle! Never / Know is the new album from The Kooks and is out on May 9th. Find out more about the album here: https://thekooks.lnk.to/NeverKnowAlbumSRInstagram @rockonteurs @thekooksmusic @guyprattofficial @garyjkemp @gimmesugarproductions Listen to the podcast and watch some of our latest episodes on our Rockonteurs YouTube channel.YouTube: https://www.youtube.com/@rockonteursFacebook: https://www.facebook.com/RockonteursTikTok: https://www.tiktok.com/@therockonteursProduced for WMG UK by Ben Jones at Gimme Sugar Productions Hosted on Acast. See acast.com/privacy for more information.
This week on Rockonteurs, we are delighted to welcome Luke Pritchard from The Kooks to the show.The Kooks were formed in Brighton in the mid 00s and their 2006 debut album ‘Inside In, Inside Out' sold 2 million copies in the UK alone. Their follow up ‘Konk' was another UK No.1 album and they are enjoying a new resurgence as social media has made them a generations new favourite band again. Luke joins Gary and Guy to talk about his influences from Dylan to the Stones, how success so early on in their career was a double-edged sword, how he once kicked Alex Turner from the Arctic Monkey's in the face at a gig and embarrassed himself in front of a Beatle! Never / Know is the new album from The Kooks and is out on May 9th. Find out more about the album here: https://thekooks.lnk.to/NeverKnowAlbumSRInstagram @rockonteurs @thekooksmusic @guyprattofficial @garyjkemp @gimmesugarproductions Listen to the podcast and watch some of our latest episodes on our Rockonteurs YouTube channel.YouTube: https://www.youtube.com/@rockonteursFacebook: https://www.facebook.com/RockonteursTikTok: https://www.tiktok.com/@therockonteursProduced for WMG UK by Ben Jones at Gimme Sugar Productions Hosted on Acast. See acast.com/privacy for more information.
Hoy, con motivo del aniversario del disco debut de los Last Shadow Puppets, The Age of the Understatement (publicado el 15 de abril de 2008), recuperamos un breve especial para de la banda paralela de Alex Turner y Miles Kane. También recordaros que ya podéis comprar La gran travesía del rock, un libro interactivo que además contará con 15 programas de radio complementarios, a modo de ficción sonora... con muchas sorpresas y voces conocidas... https://www.ivoox.com/gran-travesia-del-rock-capitulos-del-libro_bk_list_10998115_1.html Jimi y Janis, dos periodistas musicales, vienen de 2027, un mundo distópico y delirante donde el reguetón tiene (casi) todo el poder... pero ellos dos, deciden alistarse al GLP para viajar en el tiempo, salvar el rock, rescatar sus archivos ocultos y combatir la dictadura troyana del FPR. ✨ El libro ya está en diversas webs, en todostuslibros.com Amazon, Fnac y también en La Montaña Mágica, por ejemplo https://www.amazon.es/GRAN-TRAVES%C3%8DA-DEL-ROCK-autoestopista/dp/8419924938 ▶️ Y ya sabéis, si os gusta el programa y os apetece, podéis apoyarnos y colaborar con nosotros por el simple precio de una cerveza al mes, desde el botón azul de iVoox, y así, además podéis acceder a todo el archivo histórico exclusivo. Muchas gracias también a todos los mecenas y patrocinadores por vuestro apoyo: Poncho C, Don T, Francisco Quintana, Gastón Nicora, Con, Piri, Dotakon, Tete García, Jose Angel Tremiño, Marco Landeta Vacas, Oscar García Muñoz, Raquel Parrondo, Javier Gonzar, Eva Arenas, Poncho C, Nacho, Javito, Alberto, Pilar Escudero, Blas, Moy, Dani Pérez, Santi Oliva, Vicente DC,, Leticia, JBSabe, Flor, Melomanic, Arturo Soriano, Gemma Codina, Raquel Jiménez, Pedro, SGD, Raul Andres, Tomás Pérez, Pablo Pineda, Quim Goday, Enfermerator, María Arán, Joaquín, Horns Up, Victor Bravo, Fonune, Eulogiko, Francisco González, Marcos Paris, Vlado 74, Daniel A, Redneckman, Elliott SF, Guillermo Gutierrez, Sementalex, Miguel Angel Torres, Suibne, Javifer, Matías Ruiz Molina, Noyatan, Estefanía, Iván Menéndez, Niksisley y a los mecenas anónimos.
Die Arctic Monkeys sind die größte Rock'n'Roll-Band unserer Zeit – und das, obwohl sie seit über zehn Jahren keinen Rock'n'Roll mehr machen. Also zumindest auf Platte. Live dagegen füllen sie Arenen und Stadien, Sänger Alex Turner ist die Wiedergeburt des Brit-Rock für Nostalgiker und ein Rebel-Heartthrob für Teenagerinnen. Auf den letzten beiden Alben hat der charismatische Frontmann allerdings das Croonen entdeckt, sowie elegante Lounge- und Orchester-Musik der 70er Jahre, und damit auch den letzten Musik-Kritiker für sich gewonnen. Das war bereits der mindestens zweite radikale Kurswechsel, nachdem die Band aus Sheffield in den goldenen Nullerjahren des Indie eine der Go-To-Adressen war, jeden „Dancefloor“ erobert hat bis die Sonne wieder auf- oder das Licht anging. Dann ging es auf die Rancho de la Luna zu Josh Homme von den Queens of the Stone Age, mit ihm entdeckten sie den Sound der kalifornischen Wüste. Was langfristig im bis heute erfolgreichsten Album mündete, ihrem vor Riffs, Verzerrern und Lovesongs nur so strotzendem Signature-Album „AM“. „Do I Wanna Know... all about the Arctic Monkeys? Wer diese Frage mit „Hell Yeah!“ beantwortet, ist bei dieser Folge genau richtig. In Episode #103ArcticMonkeys kommt mit Philipp Kressmann ein absoluter Indie-Experte und Fan der ersten Stunde vorbei. Alex Turner, Jamie Cook und Nick O'Malley aus der Band erzählen von ihren großen Alben. Jetzt überall, wo es Podcasts gibt.
Top 51s, Tenuous Spireite in Last One Laughing, A BRAND NEW SONG, Osman Kakay's House of Games, featurning Alex Turner presenting SINGONYMS and a middling Whelan Fortuné.
Miles Kane spills the beans on his musical inspiration, stories from his career, his love for Alex Turner and Roberto Baggio & performs live in the caff!New episodes out weekly, subscribe for more!Produced by Face For Radio Media.
British comedian Sean McLoughlin was recently in Lisbon to open for Ricky Gervais, in the “Mortality” tour. He had already performed at a sold out MEO Arena in 2023, when the creator of “The Office” took to the stage in Portugal for the first time, during the “Armageddon” tour. Sean McLoughlin has also performed his own solo show in Portugal and will be back in june, with “White Elephant”. In Lisbon, on the 23rd and 24th of june, and in Porto, on the 22nd of november. In an interview with Gustavo Carvalho, for “Humor À Primeira Vista”, he explains how he became Ricky Gervais' opener, presents a theory that justifies the success of stand-up comedy around the world, reveals the behind the scenes of a performance in Los Angeles, for 17.000 people, that felt more like an Apollo mission and reveals that, despite not living in Portugal, he is not that far from it. This interview was conducted in english. Two versions are available: one original, in english; one translated to portuguese. This is the english version. You can listen to the portuguese version hereSee omnystudio.com/listener for privacy information.
The midierror meets... interview series is back with Sonicstate, speaking to all kinds of people working in music and sound. For this episode we hear from Matt Cox of Gravity Rigs - who has been the Chemical Brothers go-to MIDI & Keyboard tech for over 30 years, still touring with them since the mid-90s! He's also worked with The Prodigy, Hot Chip, Disclosure, Bicep, Eric Prydz, Pendulum, The Pet Shop Boys, New Order, Orbital and many more - ensuring they have rock-solid, bullet-proof live performance systems. Gravity Rigs was co-founded with Alex Turner; together they design and build rigs for the biggest artists in the world, many of which are detailed on their website. https://gravityrigs.com/ This is series 2, episode 1 and there are 50 previous episodes available now featuring Fatboy Slim, CJ Bolland, Andrew Huang, Tim Exile, High Contrast, Mylar Melodies, Infected Mushroom, DJ Rap, John Grant and many more. Available on Soundcloud, Spotify, Apple Music and Bandcamp.
Alex Turner of the Animal and Plant Health Inspection Service explains efforts over the past year to monitor and mitigate a strain of H5N1 Highly Pathogenic Avian Flu found in milk samples in dairy cattle.See omnystudio.com/listener for privacy information.
Deep Cuts grabs the “people's mic” in this episode about 2011. Tune in and find out which host thinks he's bringing sexy back with his pick and which host loves sea shanties sung by baristas who were liberal arts majors. Featuring tracks by Class Actress, Alex Turner, Smith Westerns, Duke Spirit and Cage the Elephant. Learn more about your ad choices. Visit megaphone.fm/adchoices
Today, Polish Club frontman Novak joins host Jeremy Dylan to discuss the Arctic Monkey's divisive cult classic album ‘Tranquility Base Hotel and Casino', the sci-fi concept album that followed up the rock'n'roll behemoth of AM. Jeremy and Novak reminisce about their days as office-mates, Novak coming out as a singer at karaoke, ageing in rock'n'roll, why so many artists both love and envy this album, the artistic bravery of following their biggest commercial hit with a ‘jazzy concept album about eating pizza on the moon', the alternate reality where this was an Alex Turner solo album, how swerving musically helps sustain a long career and more. Listen to the new Polish Club album 'Heavy Weight Heart', out now!
In this week's episode of TWTW, we are transported back to the first decade of the new millennium, when the IT girl of the era hitched her wagon to the frontman de jour, and we were gifted with Alexa Chung and Alex Turner stepping out at Glasto and gracing the gossip pages. Comedian Amy Matthews is to thank for finally allowing us a good old dissecting of this much lamented king and queen of indie culture, so get your ballet pumps and skinny jeans on, and we'll see you down at Bungalow 8. Come and see TWTW LIVE at this year's Cheerful Earful festival in London on October 19th 2024! Full info and tickets can be found HERE! See you there XX Learn more about your ad choices. Visit megaphone.fm/adchoices
"Das Licht, der Herbst, die Tiefe." Die eigene Beschreibung von Saguru scheint nicht gerade zum Sommer zu passen. Tatsächlich singt er auch von Schneestürmen in schwarz-weiß, aber vor allem von der Liebe, und die richtet sich nun mal nicht nach Jahreszeiten. Mit leicht melancholischem Ton sind also auch Sagurus vertonte Emotionen universell und das ganze Jahr über passend. Saguru, das ist Chris Rappel aus München, beeinflusst von Vorbildern wie Alex Turner oder Bon Iver. Seine nächsteEP steht bereits in den Startlöchern und von dieser hat er gerade den zweiten Song veröffentlicht. Untermalt von einem elektronisch-sanften Mix aus weichen, geradezu verschwindenden Gitarren und verzerrten gedämpften Synthesizern, geht es in "True Love" um das Privileg der wahren Liebe; bei der man schnell alles andere um sich vergessen kann – wie den aktuellen Monat.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brief notes on the Wikipedia game, published by Olli Järviniemi on July 14, 2024 on LessWrong. Alex Turner introduced an exercise to test subjects' ability to notice falsehoods: change factual statements in Wikipedia articles, hand the edited articles to subjects and see whether they notice the modifications. I've spent a few hours making such modifications and testing the articles on my friend group. You can find the articles here. I describe my observations and thoughts below. The bottom line: it is hard to come up with good modifications / articles to modify, and this is the biggest crux for me. The concept Alex Turner explains the idea well here. The post is short, so I'm just copying it here: Rationality exercise: Take a set of Wikipedia articles on topics which trainees are somewhat familiar with, and then randomly select a small number of claims to negate (negating the immediate context as well, so that you can't just syntactically discover which claims were negated). For example: "By the time they are born, infants can recognize and have a preference for their mother's voice suggesting some prenatal development of auditory perception." > modified to "Contrary to early theories, newborn infants are not particularly adept at picking out their mother's voice from other voices. This suggests the absence of prenatal development of auditory perception." Sometimes, trainees will be given a totally unmodified article. For brevity, the articles can be trimmed of irrelevant sections. Benefits: Addressing key rationality skills. Noticing confusion; being more confused by fiction than fact; actually checking claims against your models of the world. If you fail, either the article wasn't negated skillfully ("5 people died in 2021" -> "4 people died in 2021" is not the right kind of modification), you don't have good models of the domain, or you didn't pay enough attention to your confusion. Either of the last two are good to learn. Features of good modifications What does a good modification look like? Let's start by exploring some failure modes. Consider the following modifications: "World War II or the Second World War (1 September 1939 - 2 September 1945) was..." -> "World War II or the Second World War (31 August 1939 - 2 September 1945) was... "In the wake of Axis defeat, Germany, Austria, Japan and Korea were occupied" -> "In the wake of Allies defeat, United States, France and Great Britain were occupied" "Operation Barbarossa was the invasion of the Soviet Union by..." -> "Operation Bergenstein was the invasion of the Soviet Union by..." Needless to say, these are obviously poor changes for more than one reason. Doing something which is not that, one gets at least the following desiderata for a good change: The modifications shouldn't be too obvious nor too subtle; both failure and success should be realistic outcomes. The modification should have implications, rather than being an isolated fact, test of memorization or a mere change of labels. The "intended solution" is based on general understanding of a topic, rather than memorization. The change "The world population is 8 billion" "The world population is 800,000" definitely has implications, and you could indirectly infer that the claim is false, but in practice people would think "I've previously read that the world population is 8 billion. This article gives a different number. This article is wrong." Thus, this is a bad change. Finally, let me add: The topic is of general interest and importance. While the focus is on general rationality skills rather than object-level information, I think you get better examples by having interesting and important topics, rather than something obscure. Informally, an excellent modification is such that it'd just be very silly to actually believe the false claim made, in t...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How ARENA course material gets made, published by CallumMcDougall on July 3, 2024 on LessWrong. TL;DR In this post, I describe my methodology for building new material for ARENA. I'll mostly be referring to the exercises on IOI, Superposition and Function Vectors as case studies. I expect this to be useful for people who are interested in designing material for ARENA or ARENA-like courses, as well as people who are interested in pedagogy or ML paper replications. The process has 3 steps: 1. Start with something concrete 2. First pass: replicate, and understand 3. Second pass: exercise-ify Summary I'm mostly basing this on the following 3 sets of exercises: Indirect Object Identification - these exercises focus on the IOI paper (from Conmy et al). The goal is to have people understand what exploratory analysis of transformers looks like, and introduce the key ideas of the circuits agenda. Superposition & SAEs - these exercises focus on understanding superposition and the agenda of dictionary learning (specifically sparse autoencoders). Most of the exercises explore Anthropic's Toy Models of Superposition paper, except for the last 2 sections which explore sparse autoencoders (firstly by applying them to the toy model setup, secondly by exploring a sparse autoencoder trained on a language model). Function Vectors - these exercises focus on the Function Vectors paper by David Bau et al, although they also make connections with related work such as Alex Turner's GPT2-XL steering vector work. These exercises were interesting because they also had the secondary goal of being an introduction to the nnsight library, in much the same way that the intro to mech interp exercises were also an introduction to TransformerLens. The steps I go through are listed below. I'm indexing from zero because I'm a software engineer so of course I am. The steps assume you already have an idea of what exercises you want to create; in Appendix (1) you can read some thoughts on what makes for a good exercise set. 1. Start with something concrete When creating material, you don't want to be starting from scratch. It's useful to have source code available to browse - bonus points if that takes the form of a Colab or something which is self-contained and has easily visible output. IOI - this was Neel's "Exploratory Analysis Demo" exercises. The rest of the exercises came from replicating the paper directly. Superposition - this was Anthroic's Colab notebook (although the final version went quite far beyond this). The very last section (SAEs on transformers) was based on Neel Nanda's demo Colab). Function Vectors - I started with the NDIF demo notebook, to show how some basic nnsight syntax worked. As for replicating the actual function vectors paper, unlike the other 2 examples I was mostly just working from the paper directly. It helped that I was collaborating with some of this paper's authors, so I was able to ask them some questions to clarify aspects of the paper. 2. First-pass: replicate, and understand The first thing I'd done in each of these cases was go through the material I started with, and make sure I understood what was going on. Paper replication is a deep enough topic for its own series of blog posts (many already exist), although I'll emphasise that I'm not usually talking about full paper replication here, because ideally you'll be starting from something a it further along, be that a Colab, a different tutorial, or something else. And even when you are just working directly from a paper, you shouldn't make the replication any harder for yourself than you need to. If there's code you can take from somewhere else, then do. My replication usually takes the form of working through a notebook in VSCode. I'll either start from scratch, or from a downloaded Colab if I'm using one as a ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shard Theory - is it true for humans?, published by Rishika Bose on June 14, 2024 on The AI Alignment Forum. And is it a good model for value learning in AI? (Read on Substack: https://recursingreflections.substack.com/p/shard-theory-is-it-true-for-humans) TLDR Shard theory proposes a view of value formation where experiences lead to the creation of context-based 'shards' that determine behaviour. Here, we go over psychological and neuroscientific views of learning, and find that while shard theory's emphasis on context bears similarity to types of learning such as conditioning, it does not address top-down influences that may decrease the locality of value-learning in the brain. What's Shard Theory (and why do we care)? In 2022, Quintin Pope and Alex Turner posted ' The shard theory of human values', where they described their view of how experiences shape the value we place on things. They give an example of a baby who enjoys drinking juice, and eventually learns that grabbing at the juice pouch, moving around to find the juice pouch, and modelling where the juice pouch might be, are all helpful steps in order to get to its reward. 'Human values', they say, 'are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics…' And since, like humans, AI is often trained with reinforcement learning, the same might apply to AI. The original post is long (over 7,000 words) and dense, but Lawrence Chan helpfully posted a condensation of the topic in ' Shard Theory in Nine Theses: a Distillation and Critical Appraisal'. In it, he presents nine (as might be expected) main points of shard theory, ending with the last thesis: 'shard theory as a model of human values'. 'I'm personally not super well versed in neuroscience or psychology', he says, 'so I can't personally attest to [its] solidity…I'd be interested in hearing from experts in these fields on this topic.' And that's exactly what we're here to do. A Crash Course on Human Learning Types of learning What is learning? A baby comes into the world and is inundated with sensory information of all kinds. From then on, it must process this information, take whatever's useful, and store it somehow for future use. There's various places in the brain where this information is stored, and for various purposes. Looking at these various types of storage, or memory, can help us understand what's going on: 3 types of memory We often group memory types by the length of time we hold on to them - 'working memory' (while you do some task), 'short-term memory' (maybe a few days, unless you revise or are reminded), and 'long-term memory' (effectively forever). Let's take a closer look at long-term memory: Types of long-term memory We can broadly split long-term memory into 'declarative' and 'nondeclarative'. Declarative memory is stuff you can talk about (or 'declare'): what the capital of your country is, what you ate for lunch yesterday, what made you read this essay. Nondeclarative covers the rest: a grab-bag of memory types including knowing how to ride a bike, getting habituated to a scent you've been smelling all day, and being motivated to do things you were previously rewarded for (like drinking sweet juice). For most of this essay, we'll be focusing on the last type: conditioning. Types of conditioning Conditioning Sometime in the 1890s, a physiologist named Ivan Pavlov was researching salivation using dogs. He would feed the dogs with powdered meat, and insert a tube into the cheek of each dog to measure their saliva.As expected, the dogs salivated when the food was in front of them. Unexpectedly, the dogs also salivated when they heard the footsteps of his assistant (who brought them their food). Fascinated by this, Pavlov started to play a metronome whenever he ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shard Theory - is it true for humans?, published by ErisApprentice on June 14, 2024 on The AI Alignment Forum. And is it a good model for value learning in AI? (Read on Substack: https://recursingreflections.substack.com/p/shard-theory-is-it-true-for-humans) TLDR Shard theory proposes a view of value formation where experiences lead to the creation of context-based 'shards' that determine behaviour. Here, we go over psychological and neuroscientific views of learning, and find that while shard theory's emphasis on context bears similarity to types of learning such as conditioning, it does not address top-down influences that may decrease the locality of value-learning in the brain. What's Shard Theory (and why do we care)? In 2022, Quintin Pope and Alex Turner posted ' The shard theory of human values', where they described their view of how experiences shape the value we place on things. They give an example of a baby who enjoys drinking juice, and eventually learns that grabbing at the juice pouch, moving around to find the juice pouch, and modelling where the juice pouch might be, are all helpful steps in order to get to its reward. 'Human values', they say, 'are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics…' And since, like humans, AI is often trained with reinforcement learning, the same might apply to AI. The original post is long (over 7,000 words) and dense, but Lawrence Chan helpfully posted a condensation of the topic in ' Shard Theory in Nine Theses: a Distillation and Critical Appraisal'. In it, he presents nine (as might be expected) main points of shard theory, ending with the last thesis: 'shard theory as a model of human values'. 'I'm personally not super well versed in neuroscience or psychology', he says, 'so I can't personally attest to [its] solidity…I'd be interested in hearing from experts in these fields on this topic.' And that's exactly what we're here to do. A Crash Course on Human Learning Types of learning What is learning? A baby comes into the world and is inundated with sensory information of all kinds. From then on, it must process this information, take whatever's useful, and store it somehow for future use. There's various places in the brain where this information is stored, and for various purposes. Looking at these various types of storage, or memory, can help us understand what's going on: 3 types of memory We often group memory types by the length of time we hold on to them - 'working memory' (while you do some task), 'short-term memory' (maybe a few days, unless you revise or are reminded), and 'long-term memory' (effectively forever). Let's take a closer look at long-term memory: Types of long-term memory We can broadly split long-term memory into 'declarative' and 'nondeclarative'. Declarative memory is stuff you can talk about (or 'declare'): what the capital of your country is, what you ate for lunch yesterday, what made you read this essay. Nondeclarative covers the rest: a grab-bag of memory types including knowing how to ride a bike, getting habituated to a scent you've been smelling all day, and being motivated to do things you were previously rewarded for (like drinking sweet juice). For most of this essay, we'll be focusing on the last type: conditioning. Types of conditioning Conditioning Sometime in the 1890s, a physiologist named Ivan Pavlov was researching salivation using dogs. He would feed the dogs with powdered meat, and insert a tube into the cheek of each dog to measure their saliva.As expected, the dogs salivated when the food was in front of them. Unexpectedly, the dogs also salivated when they heard the footsteps of his assistant (who brought them their food). Fascinated by this, Pavlov started to play a metronome whenever h...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shard Theory - is it true for humans?, published by Rishika on June 14, 2024 on LessWrong. And is it a good model for value learning in AI? TLDR Shard theory proposes a view of value formation where experiences lead to the creation of context-based 'shards' that determine behaviour. Here, we go over psychological and neuroscientific views of learning, and find that while shard theory's emphasis on context bears similarity to types of learning such as conditioning, it does not address top-down influences that may decrease the locality of value-learning in the brain. What's Shard Theory (and why do we care)? In 2022, Quintin Pope and Alex Turner posted ' The shard theory of human values', where they described their view of how experiences shape the value we place on things. They give an example of a baby who enjoys drinking juice, and eventually learns that grabbing at the juice pouch, moving around to find the juice pouch, and modelling where the juice pouch might be, are all helpful steps in order to get to its reward. 'Human values', they say, 'are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics…' And since, like humans, AI is often trained with reinforcement learning, the same might apply to AI. The original post is long (over 7,000 words) and dense, but Lawrence Chan helpfully posted a condensation of the topic in ' Shard Theory in Nine Theses: a Distillation and Critical Appraisal'. In it, he presents nine (as might be expected) main points of shard theory, ending with the last thesis: 'shard theory as a model of human values'. 'I'm personally not super well versed in neuroscience or psychology', he says, 'so I can't personally attest to [its] solidity…I'd be interested in hearing from experts in these fields on this topic.' And that's exactly what we're here to do. A Crash Course on Human Learning Types of learning What is learning? A baby comes into the world and is inundated with sensory information of all kinds. From then on, it must process this information, take whatever's useful, and store it somehow for future use. There's various places in the brain where this information is stored, and for various purposes. Looking at these various types of storage, or memory, can help us understand what's going on: 3 types of memory We often group memory types by the length of time we hold on to them - 'working memory' (while you do some task), 'short-term memory' (maybe a few days, unless you revise or are reminded), and 'long-term memory' (effectively forever). Let's take a closer look at long-term memory: Types of long-term memory We can broadly split long-term memory into 'declarative' and 'nondeclarative'. Declarative memory is stuff you can talk about (or 'declare'): what the capital of your country is, what you ate for lunch yesterday, what made you read this essay. Nondeclarative covers the rest: a grab-bag of memory types including knowing how to ride a bike, getting habituated to a scent you've been smelling all day, and being motivated to do things you were previously rewarded for (like drinking sweet juice). For most of this essay, we'll be focusing on the last type: conditioning. Types of conditioning Conditioning Sometime in the 1890s, a physiologist named Ivan Pavlov was researching salivation using dogs. He would feed the dogs with powdered meat, and insert a tube into the cheek of each dog to measure their saliva.As expected, the dogs salivated when the food was in front of them. Unexpectedly, the dogs also salivated when they heard the footsteps of his assistant (who brought them their food). Fascinated by this, Pavlov started to play a metronome whenever he gave the dogs their food. After a while, sure enough, the dogs would salivate whenever the metronome played, even if ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Needs of Technical AI Safety Teams, published by Ryan Kidd on May 24, 2024 on The Effective Altruism Forum. Co-Authors: @yams @Carson Jones, @McKennaFitzgerald, @Ryan Kidd MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety orgs. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2] In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions. The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI Safety field. All interviews were conducted on the condition of anonymity. Needs by Organization Type Organization type Talent needs Scaling Lab (e.g., Anthropic, Google DeepMind, OpenAI) Safety Teams Iterators > Amplifiers Small Technical Safety Orgs ( Machine Learning (ML) Engineers Growing Technical Safety Orgs (10-30 FTE) Amplifiers > Iterators Independent Research Iterators > Connectors Here, ">" means "are prioritized over." Archetypes We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren't as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying more experienced researchers often isn't possible. We acknowledge past framings by Charlie Rogers-Smith and Rohin Shah (research lead/contributor), John Wentworth (theorist/experimentalist/distillator), Vanessa Kosoy (proser/poet), Adam Shimi (mosaic/palimpsests), and others, but believe our framing of current AI safety talent archetypes is meaningfully different and valuable, especially pertaining to current funding and employment opportunities. Connectors / Iterators / Amplifiers Connectors are strong conceptual thinkers who build a bridge between contemporary empirical work and theoretical understanding. Connectors include people like Paul Christiano, Buck Shlegeris, Evan Hubinger, and Alex Turner[3]; researchers doing original thinking on the edges of our conceptual and experimental knowledge in order to facilitate novel understanding. Note that most Connectors are typically not purely theoretical; they still have the technical knowledge required to design and run experiments. However, they prioritize experiments and discriminate between research agendas based on original, high-level insights and theoretical models, rather than on spur of the moment intuition or the wisdom of the crowds. Pure Connectors often have a long lead time before they're able to produce impactful work, since it's usually necessary for them to download and engage with varied conceptual models. For this reason, we make little mention of a division between experienced and inexperienced Connectors. Iterators are strong empiricists who build tight, efficient feedback loops for themselves and their collaborators. Ethan Perez is the central contemporary example here; his efficient prioritization and effective use of frictional...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Needs in Technical AI Safety, published by yams on May 24, 2024 on LessWrong. Co-Authors: @yams, @Carson Jones, @McKennaFitzgerald, @Ryan Kidd MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety teams. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2] In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions. The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI Safety field. All interviews were conducted on the condition of anonymity. Needs by Organization Type Organization type Talent needs Scaling Lab (i.e. OpenAI, DeepMind, Anthropic) Safety Teams Iterators > Amplifiers Small Technical Safety Orgs ( Machine Learning (ML) Engineers Growing Technical Safety Orgs (10-30 FTE) Amplifiers > Iterators Independent Research Iterators > Connectors Archetypes We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren't as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying more experienced researchers often isn't possible. We acknowledge past framings by Charlie Rogers-Smith and Rohin Shah (research lead/contributor), John Wentworth (theorist/experimentalist/distillator), Vanessa Kosoy (proser/poet), Adam Shimi (mosaic/palimpsests), and others, but believe our framing of current AI safety talent archetypes is meaningfully different and valuable, especially pertaining to current funding and employment opportunities. Connectors / Iterators / Amplifiers Connectors are strong conceptual thinkers who build a bridge between contemporary empirical work and theoretical understanding. Connectors include people like Paul Christiano, Buck Shlegeris, Evan Hubinger, and Alex Turner[3]; researchers doing original thinking on the edges of our conceptual and experimental knowledge in order to facilitate novel understanding. Note that most Connectors are typically not purely theoretical; they still have the technical knowledge required to design and run experiments. However, they prioritize experiments and discriminate between research agendas based on original, high-level insights and theoretical models, rather than on spur of the moment intuition or the wisdom of the crowds. Pure Connectors often have a long lead time before they're able to produce impactful work, since it's usually necessary for them to download and engage with varied conceptual models. For this reason, we make little mention of a division between experienced and inexperienced Connectors. Iterators are strong empiricists who build tight, efficient feedback loops for themselves and their collaborators. Ethan Perez is the central contemporary example here; his efficient prioritization and effective use of frictional time has empowered him to make major contributions to a wide range of empir...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Big Picture AI Safety: Introduction, published by EuanMcLean on May 23, 2024 on LessWrong. tldr: I conducted 17 semi-structured interviews of AI safety experts about their big picture strategic view of the AI safety landscape: how will human-level AI play out, how things might go wrong, and what should the AI safety community be doing. While many respondents held "traditional" views (e.g. the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside the field may infer. What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what we should do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many ideas and opinions are not written down anywhere, they exist only in people's heads and in lunchtime conversations at AI labs and coworking spaces. I set out to learn what the AI safety community believes about the strategic landscape of AI safety. I conducted 17 semi-structured interviews with a range of AI safety experts. I avoided going into any details of particular technical concepts or philosophical arguments, instead focussing on how such concepts and arguments fit into the big picture of what AI safety is trying to achieve. This work is similar to the AI Impacts surveys, Vael Gates' AI Risk Discussions, and Rob Bensinger's existential risk from AI survey. This is different to those projects in that both my approach to interviews and analysis are more qualitative. Part of the hope for this project was that it can hit on harder-to-quantify concepts that are too ill-defined or intuition-based to fit in the format of previous survey work. Questions I asked the participants a standardized list of questions. What will happen? Q1 Will there be a human-level AI? What is your modal guess of what the first human-level AI (HLAI) will look like? I define HLAI as an AI system that can carry out roughly 100% of economically valuable cognitive tasks more cheaply than a human. Q1a What's your 60% or 90% confidence interval for the date of the first HLAI? Q2 Could AI bring about an existential catastrophe? If so, what is the most likely way this could happen? Q2a What's your best guess at the probability of such a catastrophe? What should we do? Q3 Imagine a world where, absent any effort from the AI safety community, an existential catastrophe happens, but actions taken by the AI safety community prevent such a catastrophe. In this world, what did we do to prevent the catastrophe? Q4 What research direction (or other activity) do you think will reduce existential risk the most, and what is its theory of change? Could this backfire in some way? What mistakes have been made? Q5 Are there any big mistakes the AI safety community has made in the past or are currently making? These questions changed gradually as the interviews went on (given feedback from participants), and I didn't always ask the questions exactly as I've presented them here. I asked participants to answer from their internal model of the world as much as possible and to avoid deferring to the opinions of others (their inside view so to speak). Participants Adam Gleave is the CEO and co-founder of the alignment research non-profit FAR AI. (Sept 23) Adrià Garriga-Alonso is a research scientist at FAR AI. (Oct 23) Ajeya Cotra leads Open Philantropy's grantmaking on technical research that could help to clarify and reduce catastrophic risks from advanced AI. (Jan 24) Alex Turner is a research scientist at Google DeepMind on the Scalable Alignment team. (Feb 24) Ben Cottie...
Two of the youngest artists at Sound City, Yee Loi and Alex Turner, and Grace from gothic shredders of the north VENUS GRRRLS speaks about their youth music organisation. Become a member of Rough Trade Club New Music, and you'll receive 1/3 off Rough Trade's Album of the Month on exclusive varient. Head to http://roughtrade.com/club and use 'CLUB101POD' as your voucher. DistroKid makes music distribution fun and easy with unlimited uploads and artists keeping the ENTIRETY of their revenue. Get 30% off the first year of their service by signing up at https://distrokid.com/vip/101pod Get £50 off your weekend ticket to 2000 Trees festival: where The Gaslight Anthem, The Chats, Hot Mulligan and TONS of excellent bands are playing. Use 101POD at checkout: 2000trees.co.uk Learn more about your ad choices. Visit megaphone.fm/adchoices
Esta sesión se convierten en la antesala del Warm Up 2024, festival que se celebra en Murcia este viernes y sábado, 3 y 4 de mayo, y que puedes seguir a través de Radio 3, desde las 21h, hoy y mañana, con los comentarios de Julio Ródenas y Constan Sotoca. VIVA SUECIA – La OrillaVEINTIUNO – La ToscanaSIDONIE – No Salgo MásGINEBRAS – Alex TurnerDELAPORTE – Me La PeguéBOMBA ESTÉREO – FuegoJUDELINE – MangataSLEAFORD MODS ft BILLY NOMATES - Mork n MindyJOHNNY MARR – Easy MoneyARDE BOGOTÁ – Los PerrosMUJERES – No Puedo MásSEN SENRA – Meu AmoreLA LA LOVE YOU ft SAMURAï – El Principio de Algo (Innmir & Wisemen Project Remix)EDITORS – Karma ClimbBLACK LIPS – Make You MinePERRO – Gracias, de NadaCUPIDO - SantaEscuchar audio
'Pretty Visitors' showcases Matthew J Helders the Third at the height of his drumming mastery. The agile beast behind the kit takes centre stage, commanding attention with his incredible Drum Fills. Join us as we dissect the performance, exploring the nuances that make 'Pretty Visitors' a testament to his status as one of the finest drummers of his generation .The track also features one of Alex Turner's most memorable quips, injecting a dose of lyrical wit into the frenetic energy of the song. Don't Believe The Hype is written and produced by Nick Lee and Dan Holt. Sign up for our Patreon here: https://patreon.com/arcticpodcast Find all our links here: https://linktr.ee/arcticmonkeyspodcast Get in touch with the show at arcticmonkeyspodcast@gmail.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Many arguments for AI x-risk are wrong, published by Alex Turner on March 5, 2024 on The AI Alignment Forum. The following is a lightly edited version of a memo I wrote for a retreat. It was inspired by a draft of Counting arguments provide no evidence for AI doom. I think that my post covers important points not made by the published version of that post. I'm also thankful for the dozens of interesting conversations and comments at the retreat. I think that the AI alignment field is partially founded on fundamentally confused ideas. I'm worried about this because, right now, a range of lobbyists and concerned activists and researchers are in Washington making policy asks. Some of these policy proposals seem to be based on erroneous or unsound arguments.[1] The most important takeaway from this essay is that the (prominent) counting arguments for "deceptively aligned" or "scheming" AI provide ~0 evidence that pretraining + RLHF will eventually become intrinsically unsafe. That is, that even if we don't train AIs to achieve goals, they will be "deceptively aligned" anyways. This has important policy implications. Disclaimers: I am not putting forward a positive argument for alignment being easy. I am pointing out the invalidity of existing arguments, and explaining the implications of rolling back those updates. I am not saying "we don't know how deep learning works, so you can't prove it'll be bad." I'm saying "many arguments for deep learning -> doom are weak. I undid those updates and am now more optimistic." I am not covering training setups where we purposefully train an AI to be agentic and autonomous. I just think it's not plausible that we just keep scaling up networks, run pretraining + light RLHF, and then produce a schemer.[2] Tracing back historical arguments In the next section, I'll discuss the counting argument. In this one, I want to demonstrate how often foundational alignment texts make crucial errors. Nick Bostrom's Superintelligence, for example: A range of different methods can be used to solve "reinforcement-learning problems," but they typically involve creating a system that seeks to maximize a reward signal. This has an inherent tendency to produce the wireheading failure mode when the system becomes more intelligent. Reinforcement learning therefore looks unpromising. (p.253) To be blunt, this is nonsense. I have long meditated on the nature of "reward functions" during my PhD in RL theory. In the most useful and modern RL approaches, "reward" is a tool used to control the strength of parameter updates to the network.[3] It is simply not true that "[RL approaches] typically involve creating a system that seeks to maximize a reward signal." There is not a single case where we have used RL to train an artificial system which intentionally "seeks to maximize" reward.[4] Bostrom spends a few pages making this mistake at great length.[5] After making a false claim, Bostrom goes on to dismiss RL approaches to creating useful, intelligent, aligned systems. But, as a point of further fact, RL approaches constitute humanity's current best tools for aligning AI systems today! Those approaches are pretty awesome. No RLHF, then no GPT-4 (as we know it). In arguably the foundational technical AI alignment text, Bostrom makes a deeply confused and false claim, and then perfectly anti-predicts what alignment techniques are promising. I'm not trying to rag on Bostrom personally for making this mistake. Foundational texts, ahead of their time, are going to get some things wrong. But that doesn't save us from the subsequent errors which avalanche from this kind of early mistake. These deep errors have costs measured in tens of thousands of researcher-hours. Due to the "RL->reward maximizing" meme, I personally misdirected thousands of hours on proving power-se...
“My mouth hasn't shut up about you since you kissed it. The idea that you may kiss it again is stuck in my brain, which hasn't stopped thinking about you since, well, before any kiss.” So begins the infamous love letter written by Alex Turner to Alexa Chung in 2008. Somehow this letter made its way online (Tumblr) and into the hearts of teenage girls forever (Kelly included). The letter has become a symbol of their love story ever since, but where did it come from? Why was it on Tumblr? Was it even real?? This week we're revisiting the swoon-worthy romance between Alex Turner, lead singer and lyricist of Arctic Monkey and The Last Shadow Puppets, and Alexa Chung, the model/TV presenter/fashion designer/it girl. They were both English, stylish, and incredibly of their time. And nearly had the same name! Join us as we examine the lyrics, writings, photographs, and mementos of Alex & Alexa. ***** Are you or someone you know struggling with unrelenting, intrusive thoughts about relationships? That's relationship OCD. Learn more about relationship OCD and receive evidence-based treatment at NOCD.com. Significant Lovers is a true-love podcast about historic and celebrity couples. You can contact us at significantlovers@gmail.com and follow us on Instagram and TikTok @significantlovers. Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for ‘fair use' for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use. --- Support this podcast: https://podcasters.spotify.com/pod/show/significantlovers/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dual Wielding Kindle Scribes, published by mesaoptimizer on February 21, 2024 on LessWrong. This is an informal post intended to describe a workflow / setup that I found very useful, so that others might consider adopting or experimenting with facets of it that they find useful. In August 2023, I was a part of MATS 4.0 and had begun learning the skill of deconfusion, with an aim of disentangling my conflicting intuitions between my belief that shard theory seemed to be at least directionally pointing at some issues with the MIRI model of AGI takeoff and alignment difficulty, and my belief that Nate Soares was obviously correct that reflection will break Alex Turner's diamond alignment scheme. A friend lent me his Kindle Scribe to try out as part of my workflow. I started using it for note-taking, and found it incredibly useful and bought it from him. A month later, I bought a second Kindle Scribe to add to my workflow. It has been about six months since, and I've sold both my Kindle Scribes. Here's why I found this workflow useful (and therefore why you might find it useful), and why I moved on from it. The Display The Kindle Scribe is a marvelous piece of hardware. With a 300 PPI e-ink 10.3 inch screen, reading books on it was a delight in comparison to any other device I've used to read content on. The stats I just mentioned matter: 300 PPI on a 10.3 inch display means the displayed text is incredibly crisp, almost indistinguishable from normal laptop and smartphone screens. This is not the case for most e-ink readers. E-ink screens seem to reduce eye strain by a non-trivial amount. I've looked into some studies, but the sample sizes and effect sizes were not enough to make me unilaterally recommend people switch to e-ink screens for reading. However, it does seem like the biggest benefit of using e-ink screens seems to be that you aren't staring into a display that is constantly shining light into your eyeballs, which is the equivalent of staring into a lightbulb. Anecdotally, it did seem like I was able to read and write for longer hours when I only used e-ink screens: I went from, about 8 to 10 hours a day (with some visceral eye fatigue symptoms like discomfort at the end of the day) to about 12 to 14 hours a day, without these symptoms, based on my informal tracking during September 2023. 10.3 inch screens (with a high PPI) just feel better to use in comparison to smaller (say, 6 to 7 inch screens) for reading. This seems to me to be due to a greater amount of text displayed on the screen at any given time, which seems to somehow limit the feeling of comprehensibility of the text. I assume this is somehow related to chunking of concepts in working memory, where if you have a part of a 'chunk' on one page, and another part on another page, you may have a subtle difficulty with comprehending what you are reading (if it is new to you), and the more the text you have in front of you, the more you can externalize the effort of comprehension. (I used a Kobo Libra 2 (7 inch e-ink screen) for a bit to compare how it felt to read on, to get this data.) Also, you can write notes in the Kindle Scribe. This was a big deal for me, since before this, I used to write notes on my laptop, and my laptop was a multi-purpose device. Sidenote: My current philosophy of note-taking is that I think 'on paper' using these notes, and don't usually refer to it later on. The aim is to augment my working memory with an external tool, and the way I write notes usually reflects this -- I either write down most of my relevant and conscious thoughts as I think them (organized as a sequence of trees, where each node is a string representing a 'thought'), or I usually write 'waypoints' for my thoughts, where each waypoint is a marker for a conclusion of a sequence / tree of thoughts, or an inte...
We had the pleasure of interviewing Reverend & The Makers over Zoom video!The Reverend's story is one of the great survival stories of the music industry as charisma, talent, defiance and sheer willpower. Jon is The Godfather to numerous northern bands coming through and was even labeled a guiding light to Arctic Monkeys frontman Alex Turner during their early years. Jeremy Corbyn has introduced them onto stage and is a firm friend (he was also at their sold out show at Islington Academy).With the recent release of their 7th studio album, Rev have fought back through adversity and are one of the great survivors of the British music scene. The bands recent single ‘Heatwave In The Cold North', a hazy, sun-drenched Barry White-inspired soul bop, has become their biggest hit in over a decade - Radio 2 A-list, Record of the Week and a top 40 airplay hit.We want to hear from you! Please email Hello@BringinitBackwards.com. www.BringinitBackwards.com#podcast #interview #bringinbackpod #ReverendandTheMakers #ReverendJonMcClure #JonMcClure #HeatwaveInTheColdNorth #NewMusic #ZoomListen & Subscribe to BiB https://www.bringinitbackwards.com/follow/ Follow our podcast on Instagram and Twitter! https://www.facebook.com/groups/bringinbackpodThis show is part of the Spreaker Prime Network, if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/4972373/advertisement