POPULARITY
In this episode of “This Is Purdue,” we're talking to Alex Turner, Purdue alum and design engineer at Dallara. Alex is a 2022 graduate of Purdue's motorsports engineering program and has used his skills and experience to earn his dream job at Dallara's U.S. headquarters in Indianapolis, just steps away from the Indianapolis Motor Speedway — home of the Indy 500. In this episode you will: Learn about the motorsports engineering program at Purdue University in Indianapolis and the opportunities available to students through the new Dallara partnership Hear how his passion for IndyCar racing led him to the motorsports engineering program at Purdue University in Indianapolis Discover how Alex's journey as a student in Indianapolis and his industry internships helped him land his current role at Dallara Listen to exclusive stories from the IndyCar engineer, including his family ties to the Indy 500 and his favorite race-day memories of “The Greatest Spectacle in Racing” Find out about the innovation and collaboration that goes into being a Dallara design engineer, including what a typical day in his life looks like Learn about Dallara's rich history with IndyCar as the exclusive chassis provider for every car on the grid since 2008 You don't want to miss this special episode that takes you behind the scenes of the world's fastest racing.
This week on Rockonteurs, we are delighted to welcome Luke Pritchard from The Kooks to the show.The Kooks were formed in Brighton in the mid 00s and their 2006 debut album ‘Inside In, Inside Out' sold 2 million copies in the UK alone. Their follow up ‘Konk' was another UK No.1 album and they are enjoying a new resurgence as social media has made them a generations new favourite band again. Luke joins Gary and Guy to talk about his influences from Dylan to the Stones, how success so early on in their career was a double-edged sword, how he once kicked Alex Turner from the Arctic Monkey's in the face at a gig and embarrassed himself in front of a Beatle! Never / Know is the new album from The Kooks and is out on May 9th. Find out more about the album here: https://thekooks.lnk.to/NeverKnowAlbumSRInstagram @rockonteurs @thekooksmusic @guyprattofficial @garyjkemp @gimmesugarproductions Listen to the podcast and watch some of our latest episodes on our Rockonteurs YouTube channel.YouTube: https://www.youtube.com/@rockonteursFacebook: https://www.facebook.com/RockonteursTikTok: https://www.tiktok.com/@therockonteursProduced for WMG UK by Ben Jones at Gimme Sugar Productions Hosted on Acast. See acast.com/privacy for more information.
This week on Rockonteurs, we are delighted to welcome Luke Pritchard from The Kooks to the show.The Kooks were formed in Brighton in the mid 00s and their 2006 debut album ‘Inside In, Inside Out' sold 2 million copies in the UK alone. Their follow up ‘Konk' was another UK No.1 album and they are enjoying a new resurgence as social media has made them a generations new favourite band again. Luke joins Gary and Guy to talk about his influences from Dylan to the Stones, how success so early on in their career was a double-edged sword, how he once kicked Alex Turner from the Arctic Monkey's in the face at a gig and embarrassed himself in front of a Beatle! Never / Know is the new album from The Kooks and is out on May 9th. Find out more about the album here: https://thekooks.lnk.to/NeverKnowAlbumSRInstagram @rockonteurs @thekooksmusic @guyprattofficial @garyjkemp @gimmesugarproductions Listen to the podcast and watch some of our latest episodes on our Rockonteurs YouTube channel.YouTube: https://www.youtube.com/@rockonteursFacebook: https://www.facebook.com/RockonteursTikTok: https://www.tiktok.com/@therockonteursProduced for WMG UK by Ben Jones at Gimme Sugar Productions Hosted on Acast. See acast.com/privacy for more information.
Hoy, con motivo del aniversario del disco debut de los Last Shadow Puppets, The Age of the Understatement (publicado el 15 de abril de 2008), recuperamos un breve especial para de la banda paralela de Alex Turner y Miles Kane. También recordaros que ya podéis comprar La gran travesía del rock, un libro interactivo que además contará con 15 programas de radio complementarios, a modo de ficción sonora... con muchas sorpresas y voces conocidas... https://www.ivoox.com/gran-travesia-del-rock-capitulos-del-libro_bk_list_10998115_1.html Jimi y Janis, dos periodistas musicales, vienen de 2027, un mundo distópico y delirante donde el reguetón tiene (casi) todo el poder... pero ellos dos, deciden alistarse al GLP para viajar en el tiempo, salvar el rock, rescatar sus archivos ocultos y combatir la dictadura troyana del FPR. ✨ El libro ya está en diversas webs, en todostuslibros.com Amazon, Fnac y también en La Montaña Mágica, por ejemplo https://www.amazon.es/GRAN-TRAVES%C3%8DA-DEL-ROCK-autoestopista/dp/8419924938 ▶️ Y ya sabéis, si os gusta el programa y os apetece, podéis apoyarnos y colaborar con nosotros por el simple precio de una cerveza al mes, desde el botón azul de iVoox, y así, además podéis acceder a todo el archivo histórico exclusivo. Muchas gracias también a todos los mecenas y patrocinadores por vuestro apoyo: Poncho C, Don T, Francisco Quintana, Gastón Nicora, Con, Piri, Dotakon, Tete García, Jose Angel Tremiño, Marco Landeta Vacas, Oscar García Muñoz, Raquel Parrondo, Javier Gonzar, Eva Arenas, Poncho C, Nacho, Javito, Alberto, Pilar Escudero, Blas, Moy, Dani Pérez, Santi Oliva, Vicente DC,, Leticia, JBSabe, Flor, Melomanic, Arturo Soriano, Gemma Codina, Raquel Jiménez, Pedro, SGD, Raul Andres, Tomás Pérez, Pablo Pineda, Quim Goday, Enfermerator, María Arán, Joaquín, Horns Up, Victor Bravo, Fonune, Eulogiko, Francisco González, Marcos Paris, Vlado 74, Daniel A, Redneckman, Elliott SF, Guillermo Gutierrez, Sementalex, Miguel Angel Torres, Suibne, Javifer, Matías Ruiz Molina, Noyatan, Estefanía, Iván Menéndez, Niksisley y a los mecenas anónimos.
Die Arctic Monkeys sind die größte Rock'n'Roll-Band unserer Zeit – und das, obwohl sie seit über zehn Jahren keinen Rock'n'Roll mehr machen. Also zumindest auf Platte. Live dagegen füllen sie Arenen und Stadien, Sänger Alex Turner ist die Wiedergeburt des Brit-Rock für Nostalgiker und ein Rebel-Heartthrob für Teenagerinnen. Auf den letzten beiden Alben hat der charismatische Frontmann allerdings das Croonen entdeckt, sowie elegante Lounge- und Orchester-Musik der 70er Jahre, und damit auch den letzten Musik-Kritiker für sich gewonnen. Das war bereits der mindestens zweite radikale Kurswechsel, nachdem die Band aus Sheffield in den goldenen Nullerjahren des Indie eine der Go-To-Adressen war, jeden „Dancefloor“ erobert hat bis die Sonne wieder auf- oder das Licht anging. Dann ging es auf die Rancho de la Luna zu Josh Homme von den Queens of the Stone Age, mit ihm entdeckten sie den Sound der kalifornischen Wüste. Was langfristig im bis heute erfolgreichsten Album mündete, ihrem vor Riffs, Verzerrern und Lovesongs nur so strotzendem Signature-Album „AM“. „Do I Wanna Know... all about the Arctic Monkeys? Wer diese Frage mit „Hell Yeah!“ beantwortet, ist bei dieser Folge genau richtig. In Episode #103ArcticMonkeys kommt mit Philipp Kressmann ein absoluter Indie-Experte und Fan der ersten Stunde vorbei. Alex Turner, Jamie Cook und Nick O'Malley aus der Band erzählen von ihren großen Alben. Jetzt überall, wo es Podcasts gibt.
Top 51s, Tenuous Spireite in Last One Laughing, A BRAND NEW SONG, Osman Kakay's House of Games, featurning Alex Turner presenting SINGONYMS and a middling Whelan Fortuné.
Miles Kane spills the beans on his musical inspiration, stories from his career, his love for Alex Turner and Roberto Baggio & performs live in the caff!New episodes out weekly, subscribe for more!Produced by Face For Radio Media.
British comedian Sean McLoughlin was recently in Lisbon to open for Ricky Gervais, in the “Mortality” tour. He had already performed at a sold out MEO Arena in 2023, when the creator of “The Office” took to the stage in Portugal for the first time, during the “Armageddon” tour. Sean McLoughlin has also performed his own solo show in Portugal and will be back in june, with “White Elephant”. In Lisbon, on the 23rd and 24th of june, and in Porto, on the 22nd of november. In an interview with Gustavo Carvalho, for “Humor À Primeira Vista”, he explains how he became Ricky Gervais' opener, presents a theory that justifies the success of stand-up comedy around the world, reveals the behind the scenes of a performance in Los Angeles, for 17.000 people, that felt more like an Apollo mission and reveals that, despite not living in Portugal, he is not that far from it. This interview was conducted in english. Two versions are available: one original, in english; one translated to portuguese. This is the english version. You can listen to the portuguese version hereSee omnystudio.com/listener for privacy information.
The midierror meets... interview series is back with Sonicstate, speaking to all kinds of people working in music and sound. For this episode we hear from Matt Cox of Gravity Rigs - who has been the Chemical Brothers go-to MIDI & Keyboard tech for over 30 years, still touring with them since the mid-90s! He's also worked with The Prodigy, Hot Chip, Disclosure, Bicep, Eric Prydz, Pendulum, The Pet Shop Boys, New Order, Orbital and many more - ensuring they have rock-solid, bullet-proof live performance systems. Gravity Rigs was co-founded with Alex Turner; together they design and build rigs for the biggest artists in the world, many of which are detailed on their website. https://gravityrigs.com/ This is series 2, episode 1 and there are 50 previous episodes available now featuring Fatboy Slim, CJ Bolland, Andrew Huang, Tim Exile, High Contrast, Mylar Melodies, Infected Mushroom, DJ Rap, John Grant and many more. Available on Soundcloud, Spotify, Apple Music and Bandcamp.
The Money Trench - The Music Industry Podcast with Mark Sutherland
Welcome to The Money Trench. In this episode, Mark catches up with the lead singer of The Kooks, Luke Pritchard. Throughout their chat, the pair discuss the latest in the music industry, including the TikTok ban and what it means for artists, the impact of AI, and more on the Los Angeles Wildfires. Luke also talks through some of the difficulties facing young artists today and what the industry needs to do to support them, as well as sharing stories of his journey to success - including working on the band's legendary debut album, and kicking Alex Turner of The Arctic Monkeys in the face! NEWSLETTER Sign up HERE for the TMT newsletter - featuring each week's hottest music industry stories. PPL The Money Trench is sponsored by the PPL. KEEP UP TO DATE For the latest podcast and music business updates, make sure to follow us on: Instagram: @the_money_trench LinkedIn: The Money Trench Website: The Money Trench GET IN TOUCH If you have any feedback, guest suggestions or general comments? We'd love to hear from you! - Get in touch here! Thanks to our partners PPL Earth/Percent Tom A Smith Aimless Play Fourth Pillar Sennheiser Junkhead Studio Tape Notes Executive Producer: Mike Walsh Producer: Tape Notes
Alex Turner of the Animal and Plant Health Inspection Service explains efforts over the past year to monitor and mitigate a strain of H5N1 Highly Pathogenic Avian Flu found in milk samples in dairy cattle.See omnystudio.com/listener for privacy information.
Deep Cuts grabs the “people's mic” in this episode about 2011. Tune in and find out which host thinks he's bringing sexy back with his pick and which host loves sea shanties sung by baristas who were liberal arts majors. Featuring tracks by Class Actress, Alex Turner, Smith Westerns, Duke Spirit and Cage the Elephant. Learn more about your ad choices. Visit megaphone.fm/adchoices
In de Daily KINK hoor je elke avond om 20:20 het belangrijkste muzieknieuws en de nieuwste releases in KINK IN TOUCH. Ook als podcast. Met vandaag nieuws over: 📌Korn📌The Cure📌Arctic Monkeys📌Vampire Weekend
Today, Polish Club frontman Novak joins host Jeremy Dylan to discuss the Arctic Monkey's divisive cult classic album ‘Tranquility Base Hotel and Casino', the sci-fi concept album that followed up the rock'n'roll behemoth of AM. Jeremy and Novak reminisce about their days as office-mates, Novak coming out as a singer at karaoke, ageing in rock'n'roll, why so many artists both love and envy this album, the artistic bravery of following their biggest commercial hit with a ‘jazzy concept album about eating pizza on the moon', the alternate reality where this was an Alex Turner solo album, how swerving musically helps sustain a long career and more. Listen to the new Polish Club album 'Heavy Weight Heart', out now!
In this week's episode of TWTW, we are transported back to the first decade of the new millennium, when the IT girl of the era hitched her wagon to the frontman de jour, and we were gifted with Alexa Chung and Alex Turner stepping out at Glasto and gracing the gossip pages. Comedian Amy Matthews is to thank for finally allowing us a good old dissecting of this much lamented king and queen of indie culture, so get your ballet pumps and skinny jeans on, and we'll see you down at Bungalow 8. Come and see TWTW LIVE at this year's Cheerful Earful festival in London on October 19th 2024! Full info and tickets can be found HERE! See you there XX Learn more about your ad choices. Visit megaphone.fm/adchoices
"Das Licht, der Herbst, die Tiefe." Die eigene Beschreibung von Saguru scheint nicht gerade zum Sommer zu passen. Tatsächlich singt er auch von Schneestürmen in schwarz-weiß, aber vor allem von der Liebe, und die richtet sich nun mal nicht nach Jahreszeiten. Mit leicht melancholischem Ton sind also auch Sagurus vertonte Emotionen universell und das ganze Jahr über passend. Saguru, das ist Chris Rappel aus München, beeinflusst von Vorbildern wie Alex Turner oder Bon Iver. Seine nächsteEP steht bereits in den Startlöchern und von dieser hat er gerade den zweiten Song veröffentlicht. Untermalt von einem elektronisch-sanften Mix aus weichen, geradezu verschwindenden Gitarren und verzerrten gedämpften Synthesizern, geht es in "True Love" um das Privileg der wahren Liebe; bei der man schnell alles andere um sich vergessen kann – wie den aktuellen Monat.
"Das Licht, der Herbst, die Tiefe." Die eigene Beschreibung von Saguru scheint nicht gerade zum Sommer zu passen. Tatsächlich singt er auch von Schneestürmen in schwarz-weiß, aber vor allem von der Liebe, und die richtet sich nun mal nicht nach Jahreszeiten. Mit leicht melancholischem Ton sind also auch Sagurus vertonte Emotionen universell und das ganze Jahr über passend. Saguru, das ist Chris Rappel aus München, beeinflusst von Vorbildern wie Alex Turner oder Bon Iver. Seine nächsteEP steht bereits in den Startlöchern und von dieser hat er gerade den zweiten Song veröffentlicht. Untermalt von einem elektronisch-sanften Mix aus weichen, geradezu verschwindenden Gitarren und verzerrten gedämpften Synthesizern, geht es in "True Love" um das Privileg der wahren Liebe; bei der man schnell alles andere um sich vergessen kann – wie den aktuellen Monat.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brief notes on the Wikipedia game, published by Olli Järviniemi on July 14, 2024 on LessWrong. Alex Turner introduced an exercise to test subjects' ability to notice falsehoods: change factual statements in Wikipedia articles, hand the edited articles to subjects and see whether they notice the modifications. I've spent a few hours making such modifications and testing the articles on my friend group. You can find the articles here. I describe my observations and thoughts below. The bottom line: it is hard to come up with good modifications / articles to modify, and this is the biggest crux for me. The concept Alex Turner explains the idea well here. The post is short, so I'm just copying it here: Rationality exercise: Take a set of Wikipedia articles on topics which trainees are somewhat familiar with, and then randomly select a small number of claims to negate (negating the immediate context as well, so that you can't just syntactically discover which claims were negated). For example: "By the time they are born, infants can recognize and have a preference for their mother's voice suggesting some prenatal development of auditory perception." > modified to "Contrary to early theories, newborn infants are not particularly adept at picking out their mother's voice from other voices. This suggests the absence of prenatal development of auditory perception." Sometimes, trainees will be given a totally unmodified article. For brevity, the articles can be trimmed of irrelevant sections. Benefits: Addressing key rationality skills. Noticing confusion; being more confused by fiction than fact; actually checking claims against your models of the world. If you fail, either the article wasn't negated skillfully ("5 people died in 2021" -> "4 people died in 2021" is not the right kind of modification), you don't have good models of the domain, or you didn't pay enough attention to your confusion. Either of the last two are good to learn. Features of good modifications What does a good modification look like? Let's start by exploring some failure modes. Consider the following modifications: "World War II or the Second World War (1 September 1939 - 2 September 1945) was..." -> "World War II or the Second World War (31 August 1939 - 2 September 1945) was... "In the wake of Axis defeat, Germany, Austria, Japan and Korea were occupied" -> "In the wake of Allies defeat, United States, France and Great Britain were occupied" "Operation Barbarossa was the invasion of the Soviet Union by..." -> "Operation Bergenstein was the invasion of the Soviet Union by..." Needless to say, these are obviously poor changes for more than one reason. Doing something which is not that, one gets at least the following desiderata for a good change: The modifications shouldn't be too obvious nor too subtle; both failure and success should be realistic outcomes. The modification should have implications, rather than being an isolated fact, test of memorization or a mere change of labels. The "intended solution" is based on general understanding of a topic, rather than memorization. The change "The world population is 8 billion" "The world population is 800,000" definitely has implications, and you could indirectly infer that the claim is false, but in practice people would think "I've previously read that the world population is 8 billion. This article gives a different number. This article is wrong." Thus, this is a bad change. Finally, let me add: The topic is of general interest and importance. While the focus is on general rationality skills rather than object-level information, I think you get better examples by having interesting and important topics, rather than something obscure. Informally, an excellent modification is such that it'd just be very silly to actually believe the false claim made, in t...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How ARENA course material gets made, published by CallumMcDougall on July 3, 2024 on LessWrong. TL;DR In this post, I describe my methodology for building new material for ARENA. I'll mostly be referring to the exercises on IOI, Superposition and Function Vectors as case studies. I expect this to be useful for people who are interested in designing material for ARENA or ARENA-like courses, as well as people who are interested in pedagogy or ML paper replications. The process has 3 steps: 1. Start with something concrete 2. First pass: replicate, and understand 3. Second pass: exercise-ify Summary I'm mostly basing this on the following 3 sets of exercises: Indirect Object Identification - these exercises focus on the IOI paper (from Conmy et al). The goal is to have people understand what exploratory analysis of transformers looks like, and introduce the key ideas of the circuits agenda. Superposition & SAEs - these exercises focus on understanding superposition and the agenda of dictionary learning (specifically sparse autoencoders). Most of the exercises explore Anthropic's Toy Models of Superposition paper, except for the last 2 sections which explore sparse autoencoders (firstly by applying them to the toy model setup, secondly by exploring a sparse autoencoder trained on a language model). Function Vectors - these exercises focus on the Function Vectors paper by David Bau et al, although they also make connections with related work such as Alex Turner's GPT2-XL steering vector work. These exercises were interesting because they also had the secondary goal of being an introduction to the nnsight library, in much the same way that the intro to mech interp exercises were also an introduction to TransformerLens. The steps I go through are listed below. I'm indexing from zero because I'm a software engineer so of course I am. The steps assume you already have an idea of what exercises you want to create; in Appendix (1) you can read some thoughts on what makes for a good exercise set. 1. Start with something concrete When creating material, you don't want to be starting from scratch. It's useful to have source code available to browse - bonus points if that takes the form of a Colab or something which is self-contained and has easily visible output. IOI - this was Neel's "Exploratory Analysis Demo" exercises. The rest of the exercises came from replicating the paper directly. Superposition - this was Anthroic's Colab notebook (although the final version went quite far beyond this). The very last section (SAEs on transformers) was based on Neel Nanda's demo Colab). Function Vectors - I started with the NDIF demo notebook, to show how some basic nnsight syntax worked. As for replicating the actual function vectors paper, unlike the other 2 examples I was mostly just working from the paper directly. It helped that I was collaborating with some of this paper's authors, so I was able to ask them some questions to clarify aspects of the paper. 2. First-pass: replicate, and understand The first thing I'd done in each of these cases was go through the material I started with, and make sure I understood what was going on. Paper replication is a deep enough topic for its own series of blog posts (many already exist), although I'll emphasise that I'm not usually talking about full paper replication here, because ideally you'll be starting from something a it further along, be that a Colab, a different tutorial, or something else. And even when you are just working directly from a paper, you shouldn't make the replication any harder for yourself than you need to. If there's code you can take from somewhere else, then do. My replication usually takes the form of working through a notebook in VSCode. I'll either start from scratch, or from a downloaded Colab if I'm using one as a ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shard Theory - is it true for humans?, published by Rishika Bose on June 14, 2024 on The AI Alignment Forum. And is it a good model for value learning in AI? (Read on Substack: https://recursingreflections.substack.com/p/shard-theory-is-it-true-for-humans) TLDR Shard theory proposes a view of value formation where experiences lead to the creation of context-based 'shards' that determine behaviour. Here, we go over psychological and neuroscientific views of learning, and find that while shard theory's emphasis on context bears similarity to types of learning such as conditioning, it does not address top-down influences that may decrease the locality of value-learning in the brain. What's Shard Theory (and why do we care)? In 2022, Quintin Pope and Alex Turner posted ' The shard theory of human values', where they described their view of how experiences shape the value we place on things. They give an example of a baby who enjoys drinking juice, and eventually learns that grabbing at the juice pouch, moving around to find the juice pouch, and modelling where the juice pouch might be, are all helpful steps in order to get to its reward. 'Human values', they say, 'are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics…' And since, like humans, AI is often trained with reinforcement learning, the same might apply to AI. The original post is long (over 7,000 words) and dense, but Lawrence Chan helpfully posted a condensation of the topic in ' Shard Theory in Nine Theses: a Distillation and Critical Appraisal'. In it, he presents nine (as might be expected) main points of shard theory, ending with the last thesis: 'shard theory as a model of human values'. 'I'm personally not super well versed in neuroscience or psychology', he says, 'so I can't personally attest to [its] solidity…I'd be interested in hearing from experts in these fields on this topic.' And that's exactly what we're here to do. A Crash Course on Human Learning Types of learning What is learning? A baby comes into the world and is inundated with sensory information of all kinds. From then on, it must process this information, take whatever's useful, and store it somehow for future use. There's various places in the brain where this information is stored, and for various purposes. Looking at these various types of storage, or memory, can help us understand what's going on: 3 types of memory We often group memory types by the length of time we hold on to them - 'working memory' (while you do some task), 'short-term memory' (maybe a few days, unless you revise or are reminded), and 'long-term memory' (effectively forever). Let's take a closer look at long-term memory: Types of long-term memory We can broadly split long-term memory into 'declarative' and 'nondeclarative'. Declarative memory is stuff you can talk about (or 'declare'): what the capital of your country is, what you ate for lunch yesterday, what made you read this essay. Nondeclarative covers the rest: a grab-bag of memory types including knowing how to ride a bike, getting habituated to a scent you've been smelling all day, and being motivated to do things you were previously rewarded for (like drinking sweet juice). For most of this essay, we'll be focusing on the last type: conditioning. Types of conditioning Conditioning Sometime in the 1890s, a physiologist named Ivan Pavlov was researching salivation using dogs. He would feed the dogs with powdered meat, and insert a tube into the cheek of each dog to measure their saliva.As expected, the dogs salivated when the food was in front of them. Unexpectedly, the dogs also salivated when they heard the footsteps of his assistant (who brought them their food). Fascinated by this, Pavlov started to play a metronome whenever he ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shard Theory - is it true for humans?, published by ErisApprentice on June 14, 2024 on The AI Alignment Forum. And is it a good model for value learning in AI? (Read on Substack: https://recursingreflections.substack.com/p/shard-theory-is-it-true-for-humans) TLDR Shard theory proposes a view of value formation where experiences lead to the creation of context-based 'shards' that determine behaviour. Here, we go over psychological and neuroscientific views of learning, and find that while shard theory's emphasis on context bears similarity to types of learning such as conditioning, it does not address top-down influences that may decrease the locality of value-learning in the brain. What's Shard Theory (and why do we care)? In 2022, Quintin Pope and Alex Turner posted ' The shard theory of human values', where they described their view of how experiences shape the value we place on things. They give an example of a baby who enjoys drinking juice, and eventually learns that grabbing at the juice pouch, moving around to find the juice pouch, and modelling where the juice pouch might be, are all helpful steps in order to get to its reward. 'Human values', they say, 'are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics…' And since, like humans, AI is often trained with reinforcement learning, the same might apply to AI. The original post is long (over 7,000 words) and dense, but Lawrence Chan helpfully posted a condensation of the topic in ' Shard Theory in Nine Theses: a Distillation and Critical Appraisal'. In it, he presents nine (as might be expected) main points of shard theory, ending with the last thesis: 'shard theory as a model of human values'. 'I'm personally not super well versed in neuroscience or psychology', he says, 'so I can't personally attest to [its] solidity…I'd be interested in hearing from experts in these fields on this topic.' And that's exactly what we're here to do. A Crash Course on Human Learning Types of learning What is learning? A baby comes into the world and is inundated with sensory information of all kinds. From then on, it must process this information, take whatever's useful, and store it somehow for future use. There's various places in the brain where this information is stored, and for various purposes. Looking at these various types of storage, or memory, can help us understand what's going on: 3 types of memory We often group memory types by the length of time we hold on to them - 'working memory' (while you do some task), 'short-term memory' (maybe a few days, unless you revise or are reminded), and 'long-term memory' (effectively forever). Let's take a closer look at long-term memory: Types of long-term memory We can broadly split long-term memory into 'declarative' and 'nondeclarative'. Declarative memory is stuff you can talk about (or 'declare'): what the capital of your country is, what you ate for lunch yesterday, what made you read this essay. Nondeclarative covers the rest: a grab-bag of memory types including knowing how to ride a bike, getting habituated to a scent you've been smelling all day, and being motivated to do things you were previously rewarded for (like drinking sweet juice). For most of this essay, we'll be focusing on the last type: conditioning. Types of conditioning Conditioning Sometime in the 1890s, a physiologist named Ivan Pavlov was researching salivation using dogs. He would feed the dogs with powdered meat, and insert a tube into the cheek of each dog to measure their saliva.As expected, the dogs salivated when the food was in front of them. Unexpectedly, the dogs also salivated when they heard the footsteps of his assistant (who brought them their food). Fascinated by this, Pavlov started to play a metronome whenever h...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shard Theory - is it true for humans?, published by Rishika on June 14, 2024 on LessWrong. And is it a good model for value learning in AI? TLDR Shard theory proposes a view of value formation where experiences lead to the creation of context-based 'shards' that determine behaviour. Here, we go over psychological and neuroscientific views of learning, and find that while shard theory's emphasis on context bears similarity to types of learning such as conditioning, it does not address top-down influences that may decrease the locality of value-learning in the brain. What's Shard Theory (and why do we care)? In 2022, Quintin Pope and Alex Turner posted ' The shard theory of human values', where they described their view of how experiences shape the value we place on things. They give an example of a baby who enjoys drinking juice, and eventually learns that grabbing at the juice pouch, moving around to find the juice pouch, and modelling where the juice pouch might be, are all helpful steps in order to get to its reward. 'Human values', they say, 'are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics…' And since, like humans, AI is often trained with reinforcement learning, the same might apply to AI. The original post is long (over 7,000 words) and dense, but Lawrence Chan helpfully posted a condensation of the topic in ' Shard Theory in Nine Theses: a Distillation and Critical Appraisal'. In it, he presents nine (as might be expected) main points of shard theory, ending with the last thesis: 'shard theory as a model of human values'. 'I'm personally not super well versed in neuroscience or psychology', he says, 'so I can't personally attest to [its] solidity…I'd be interested in hearing from experts in these fields on this topic.' And that's exactly what we're here to do. A Crash Course on Human Learning Types of learning What is learning? A baby comes into the world and is inundated with sensory information of all kinds. From then on, it must process this information, take whatever's useful, and store it somehow for future use. There's various places in the brain where this information is stored, and for various purposes. Looking at these various types of storage, or memory, can help us understand what's going on: 3 types of memory We often group memory types by the length of time we hold on to them - 'working memory' (while you do some task), 'short-term memory' (maybe a few days, unless you revise or are reminded), and 'long-term memory' (effectively forever). Let's take a closer look at long-term memory: Types of long-term memory We can broadly split long-term memory into 'declarative' and 'nondeclarative'. Declarative memory is stuff you can talk about (or 'declare'): what the capital of your country is, what you ate for lunch yesterday, what made you read this essay. Nondeclarative covers the rest: a grab-bag of memory types including knowing how to ride a bike, getting habituated to a scent you've been smelling all day, and being motivated to do things you were previously rewarded for (like drinking sweet juice). For most of this essay, we'll be focusing on the last type: conditioning. Types of conditioning Conditioning Sometime in the 1890s, a physiologist named Ivan Pavlov was researching salivation using dogs. He would feed the dogs with powdered meat, and insert a tube into the cheek of each dog to measure their saliva.As expected, the dogs salivated when the food was in front of them. Unexpectedly, the dogs also salivated when they heard the footsteps of his assistant (who brought them their food). Fascinated by this, Pavlov started to play a metronome whenever he gave the dogs their food. After a while, sure enough, the dogs would salivate whenever the metronome played, even if ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Needs of Technical AI Safety Teams, published by Ryan Kidd on May 24, 2024 on The Effective Altruism Forum. Co-Authors: @yams @Carson Jones, @McKennaFitzgerald, @Ryan Kidd MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety orgs. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2] In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions. The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI Safety field. All interviews were conducted on the condition of anonymity. Needs by Organization Type Organization type Talent needs Scaling Lab (e.g., Anthropic, Google DeepMind, OpenAI) Safety Teams Iterators > Amplifiers Small Technical Safety Orgs ( Machine Learning (ML) Engineers Growing Technical Safety Orgs (10-30 FTE) Amplifiers > Iterators Independent Research Iterators > Connectors Here, ">" means "are prioritized over." Archetypes We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren't as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying more experienced researchers often isn't possible. We acknowledge past framings by Charlie Rogers-Smith and Rohin Shah (research lead/contributor), John Wentworth (theorist/experimentalist/distillator), Vanessa Kosoy (proser/poet), Adam Shimi (mosaic/palimpsests), and others, but believe our framing of current AI safety talent archetypes is meaningfully different and valuable, especially pertaining to current funding and employment opportunities. Connectors / Iterators / Amplifiers Connectors are strong conceptual thinkers who build a bridge between contemporary empirical work and theoretical understanding. Connectors include people like Paul Christiano, Buck Shlegeris, Evan Hubinger, and Alex Turner[3]; researchers doing original thinking on the edges of our conceptual and experimental knowledge in order to facilitate novel understanding. Note that most Connectors are typically not purely theoretical; they still have the technical knowledge required to design and run experiments. However, they prioritize experiments and discriminate between research agendas based on original, high-level insights and theoretical models, rather than on spur of the moment intuition or the wisdom of the crowds. Pure Connectors often have a long lead time before they're able to produce impactful work, since it's usually necessary for them to download and engage with varied conceptual models. For this reason, we make little mention of a division between experienced and inexperienced Connectors. Iterators are strong empiricists who build tight, efficient feedback loops for themselves and their collaborators. Ethan Perez is the central contemporary example here; his efficient prioritization and effective use of frictional...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Needs in Technical AI Safety, published by yams on May 24, 2024 on LessWrong. Co-Authors: @yams, @Carson Jones, @McKennaFitzgerald, @Ryan Kidd MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety teams. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2] In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions. The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI Safety field. All interviews were conducted on the condition of anonymity. Needs by Organization Type Organization type Talent needs Scaling Lab (i.e. OpenAI, DeepMind, Anthropic) Safety Teams Iterators > Amplifiers Small Technical Safety Orgs ( Machine Learning (ML) Engineers Growing Technical Safety Orgs (10-30 FTE) Amplifiers > Iterators Independent Research Iterators > Connectors Archetypes We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren't as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying more experienced researchers often isn't possible. We acknowledge past framings by Charlie Rogers-Smith and Rohin Shah (research lead/contributor), John Wentworth (theorist/experimentalist/distillator), Vanessa Kosoy (proser/poet), Adam Shimi (mosaic/palimpsests), and others, but believe our framing of current AI safety talent archetypes is meaningfully different and valuable, especially pertaining to current funding and employment opportunities. Connectors / Iterators / Amplifiers Connectors are strong conceptual thinkers who build a bridge between contemporary empirical work and theoretical understanding. Connectors include people like Paul Christiano, Buck Shlegeris, Evan Hubinger, and Alex Turner[3]; researchers doing original thinking on the edges of our conceptual and experimental knowledge in order to facilitate novel understanding. Note that most Connectors are typically not purely theoretical; they still have the technical knowledge required to design and run experiments. However, they prioritize experiments and discriminate between research agendas based on original, high-level insights and theoretical models, rather than on spur of the moment intuition or the wisdom of the crowds. Pure Connectors often have a long lead time before they're able to produce impactful work, since it's usually necessary for them to download and engage with varied conceptual models. For this reason, we make little mention of a division between experienced and inexperienced Connectors. Iterators are strong empiricists who build tight, efficient feedback loops for themselves and their collaborators. Ethan Perez is the central contemporary example here; his efficient prioritization and effective use of frictional time has empowered him to make major contributions to a wide range of empir...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Big Picture AI Safety: Introduction, published by EuanMcLean on May 23, 2024 on LessWrong. tldr: I conducted 17 semi-structured interviews of AI safety experts about their big picture strategic view of the AI safety landscape: how will human-level AI play out, how things might go wrong, and what should the AI safety community be doing. While many respondents held "traditional" views (e.g. the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside the field may infer. What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what we should do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many ideas and opinions are not written down anywhere, they exist only in people's heads and in lunchtime conversations at AI labs and coworking spaces. I set out to learn what the AI safety community believes about the strategic landscape of AI safety. I conducted 17 semi-structured interviews with a range of AI safety experts. I avoided going into any details of particular technical concepts or philosophical arguments, instead focussing on how such concepts and arguments fit into the big picture of what AI safety is trying to achieve. This work is similar to the AI Impacts surveys, Vael Gates' AI Risk Discussions, and Rob Bensinger's existential risk from AI survey. This is different to those projects in that both my approach to interviews and analysis are more qualitative. Part of the hope for this project was that it can hit on harder-to-quantify concepts that are too ill-defined or intuition-based to fit in the format of previous survey work. Questions I asked the participants a standardized list of questions. What will happen? Q1 Will there be a human-level AI? What is your modal guess of what the first human-level AI (HLAI) will look like? I define HLAI as an AI system that can carry out roughly 100% of economically valuable cognitive tasks more cheaply than a human. Q1a What's your 60% or 90% confidence interval for the date of the first HLAI? Q2 Could AI bring about an existential catastrophe? If so, what is the most likely way this could happen? Q2a What's your best guess at the probability of such a catastrophe? What should we do? Q3 Imagine a world where, absent any effort from the AI safety community, an existential catastrophe happens, but actions taken by the AI safety community prevent such a catastrophe. In this world, what did we do to prevent the catastrophe? Q4 What research direction (or other activity) do you think will reduce existential risk the most, and what is its theory of change? Could this backfire in some way? What mistakes have been made? Q5 Are there any big mistakes the AI safety community has made in the past or are currently making? These questions changed gradually as the interviews went on (given feedback from participants), and I didn't always ask the questions exactly as I've presented them here. I asked participants to answer from their internal model of the world as much as possible and to avoid deferring to the opinions of others (their inside view so to speak). Participants Adam Gleave is the CEO and co-founder of the alignment research non-profit FAR AI. (Sept 23) Adrià Garriga-Alonso is a research scientist at FAR AI. (Oct 23) Ajeya Cotra leads Open Philantropy's grantmaking on technical research that could help to clarify and reduce catastrophic risks from advanced AI. (Jan 24) Alex Turner is a research scientist at Google DeepMind on the Scalable Alignment team. (Feb 24) Ben Cottie...
Arctic Monkeys' debut album, "Whatever People Say I Am, That's What I'm Not," released in 2006, is a raw and energetic portrayal of youth culture and nightlife in Sheffield, England. The album bursts with frenetic guitar riffs, punchy rhythms, and Alex Turner's sharp, observational lyrics, capturing the exhilaration and chaos of being young and restless. Tracks like "I Bet You Look Good on the Dancefloor" and "When the Sun Goes Down" became instant anthems, showcasing the band's knack for crafting infectious indie rock tunes that resonate with both rebellious spirit and introspective depth.Listen to the album: SpotifyApple MusicLinks:Official websiteContactSupport us on PatreonDISCLAIMER: Due to copyright restrictions, we are unable to play pieces of the songs we cover in these episodes. Playing clips of songs are unfortunately prohibitively expensive to obtain the proper licensing. We strongly encourage you to listen to the album along with us on your preferred format to enhance the listening experience.
Two of the youngest artists at Sound City, Yee Loi and Alex Turner, and Grace from gothic shredders of the north VENUS GRRRLS speaks about their youth music organisation. Become a member of Rough Trade Club New Music, and you'll receive 1/3 off Rough Trade's Album of the Month on exclusive varient. Head to http://roughtrade.com/club and use 'CLUB101POD' as your voucher. DistroKid makes music distribution fun and easy with unlimited uploads and artists keeping the ENTIRETY of their revenue. Get 30% off the first year of their service by signing up at https://distrokid.com/vip/101pod Get £50 off your weekend ticket to 2000 Trees festival: where The Gaslight Anthem, The Chats, Hot Mulligan and TONS of excellent bands are playing. Use 101POD at checkout: 2000trees.co.uk Learn more about your ad choices. Visit megaphone.fm/adchoices
Esta sesión se convierten en la antesala del Warm Up 2024, festival que se celebra en Murcia este viernes y sábado, 3 y 4 de mayo, y que puedes seguir a través de Radio 3, desde las 21h, hoy y mañana, con los comentarios de Julio Ródenas y Constan Sotoca. VIVA SUECIA – La OrillaVEINTIUNO – La ToscanaSIDONIE – No Salgo MásGINEBRAS – Alex TurnerDELAPORTE – Me La PeguéBOMBA ESTÉREO – FuegoJUDELINE – MangataSLEAFORD MODS ft BILLY NOMATES - Mork n MindyJOHNNY MARR – Easy MoneyARDE BOGOTÁ – Los PerrosMUJERES – No Puedo MásSEN SENRA – Meu AmoreLA LA LOVE YOU ft SAMURAï – El Principio de Algo (Innmir & Wisemen Project Remix)EDITORS – Karma ClimbBLACK LIPS – Make You MinePERRO – Gracias, de NadaCUPIDO - SantaEscuchar audio
'Pretty Visitors' showcases Matthew J Helders the Third at the height of his drumming mastery. The agile beast behind the kit takes centre stage, commanding attention with his incredible Drum Fills. Join us as we dissect the performance, exploring the nuances that make 'Pretty Visitors' a testament to his status as one of the finest drummers of his generation .The track also features one of Alex Turner's most memorable quips, injecting a dose of lyrical wit into the frenetic energy of the song. Don't Believe The Hype is written and produced by Nick Lee and Dan Holt. Sign up for our Patreon here: https://patreon.com/arcticpodcast Find all our links here: https://linktr.ee/arcticmonkeyspodcast Get in touch with the show at arcticmonkeyspodcast@gmail.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Many arguments for AI x-risk are wrong, published by Alex Turner on March 5, 2024 on The AI Alignment Forum. The following is a lightly edited version of a memo I wrote for a retreat. It was inspired by a draft of Counting arguments provide no evidence for AI doom. I think that my post covers important points not made by the published version of that post. I'm also thankful for the dozens of interesting conversations and comments at the retreat. I think that the AI alignment field is partially founded on fundamentally confused ideas. I'm worried about this because, right now, a range of lobbyists and concerned activists and researchers are in Washington making policy asks. Some of these policy proposals seem to be based on erroneous or unsound arguments.[1] The most important takeaway from this essay is that the (prominent) counting arguments for "deceptively aligned" or "scheming" AI provide ~0 evidence that pretraining + RLHF will eventually become intrinsically unsafe. That is, that even if we don't train AIs to achieve goals, they will be "deceptively aligned" anyways. This has important policy implications. Disclaimers: I am not putting forward a positive argument for alignment being easy. I am pointing out the invalidity of existing arguments, and explaining the implications of rolling back those updates. I am not saying "we don't know how deep learning works, so you can't prove it'll be bad." I'm saying "many arguments for deep learning -> doom are weak. I undid those updates and am now more optimistic." I am not covering training setups where we purposefully train an AI to be agentic and autonomous. I just think it's not plausible that we just keep scaling up networks, run pretraining + light RLHF, and then produce a schemer.[2] Tracing back historical arguments In the next section, I'll discuss the counting argument. In this one, I want to demonstrate how often foundational alignment texts make crucial errors. Nick Bostrom's Superintelligence, for example: A range of different methods can be used to solve "reinforcement-learning problems," but they typically involve creating a system that seeks to maximize a reward signal. This has an inherent tendency to produce the wireheading failure mode when the system becomes more intelligent. Reinforcement learning therefore looks unpromising. (p.253) To be blunt, this is nonsense. I have long meditated on the nature of "reward functions" during my PhD in RL theory. In the most useful and modern RL approaches, "reward" is a tool used to control the strength of parameter updates to the network.[3] It is simply not true that "[RL approaches] typically involve creating a system that seeks to maximize a reward signal." There is not a single case where we have used RL to train an artificial system which intentionally "seeks to maximize" reward.[4] Bostrom spends a few pages making this mistake at great length.[5] After making a false claim, Bostrom goes on to dismiss RL approaches to creating useful, intelligent, aligned systems. But, as a point of further fact, RL approaches constitute humanity's current best tools for aligning AI systems today! Those approaches are pretty awesome. No RLHF, then no GPT-4 (as we know it). In arguably the foundational technical AI alignment text, Bostrom makes a deeply confused and false claim, and then perfectly anti-predicts what alignment techniques are promising. I'm not trying to rag on Bostrom personally for making this mistake. Foundational texts, ahead of their time, are going to get some things wrong. But that doesn't save us from the subsequent errors which avalanche from this kind of early mistake. These deep errors have costs measured in tens of thousands of researcher-hours. Due to the "RL->reward maximizing" meme, I personally misdirected thousands of hours on proving power-se...
“My mouth hasn't shut up about you since you kissed it. The idea that you may kiss it again is stuck in my brain, which hasn't stopped thinking about you since, well, before any kiss.” So begins the infamous love letter written by Alex Turner to Alexa Chung in 2008. Somehow this letter made its way online (Tumblr) and into the hearts of teenage girls forever (Kelly included). The letter has become a symbol of their love story ever since, but where did it come from? Why was it on Tumblr? Was it even real?? This week we're revisiting the swoon-worthy romance between Alex Turner, lead singer and lyricist of Arctic Monkey and The Last Shadow Puppets, and Alexa Chung, the model/TV presenter/fashion designer/it girl. They were both English, stylish, and incredibly of their time. And nearly had the same name! Join us as we examine the lyrics, writings, photographs, and mementos of Alex & Alexa. ***** Are you or someone you know struggling with unrelenting, intrusive thoughts about relationships? That's relationship OCD. Learn more about relationship OCD and receive evidence-based treatment at NOCD.com. Significant Lovers is a true-love podcast about historic and celebrity couples. You can contact us at significantlovers@gmail.com and follow us on Instagram and TikTok @significantlovers. Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for ‘fair use' for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use. --- Support this podcast: https://podcasters.spotify.com/pod/show/significantlovers/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dual Wielding Kindle Scribes, published by mesaoptimizer on February 21, 2024 on LessWrong. This is an informal post intended to describe a workflow / setup that I found very useful, so that others might consider adopting or experimenting with facets of it that they find useful. In August 2023, I was a part of MATS 4.0 and had begun learning the skill of deconfusion, with an aim of disentangling my conflicting intuitions between my belief that shard theory seemed to be at least directionally pointing at some issues with the MIRI model of AGI takeoff and alignment difficulty, and my belief that Nate Soares was obviously correct that reflection will break Alex Turner's diamond alignment scheme. A friend lent me his Kindle Scribe to try out as part of my workflow. I started using it for note-taking, and found it incredibly useful and bought it from him. A month later, I bought a second Kindle Scribe to add to my workflow. It has been about six months since, and I've sold both my Kindle Scribes. Here's why I found this workflow useful (and therefore why you might find it useful), and why I moved on from it. The Display The Kindle Scribe is a marvelous piece of hardware. With a 300 PPI e-ink 10.3 inch screen, reading books on it was a delight in comparison to any other device I've used to read content on. The stats I just mentioned matter: 300 PPI on a 10.3 inch display means the displayed text is incredibly crisp, almost indistinguishable from normal laptop and smartphone screens. This is not the case for most e-ink readers. E-ink screens seem to reduce eye strain by a non-trivial amount. I've looked into some studies, but the sample sizes and effect sizes were not enough to make me unilaterally recommend people switch to e-ink screens for reading. However, it does seem like the biggest benefit of using e-ink screens seems to be that you aren't staring into a display that is constantly shining light into your eyeballs, which is the equivalent of staring into a lightbulb. Anecdotally, it did seem like I was able to read and write for longer hours when I only used e-ink screens: I went from, about 8 to 10 hours a day (with some visceral eye fatigue symptoms like discomfort at the end of the day) to about 12 to 14 hours a day, without these symptoms, based on my informal tracking during September 2023. 10.3 inch screens (with a high PPI) just feel better to use in comparison to smaller (say, 6 to 7 inch screens) for reading. This seems to me to be due to a greater amount of text displayed on the screen at any given time, which seems to somehow limit the feeling of comprehensibility of the text. I assume this is somehow related to chunking of concepts in working memory, where if you have a part of a 'chunk' on one page, and another part on another page, you may have a subtle difficulty with comprehending what you are reading (if it is new to you), and the more the text you have in front of you, the more you can externalize the effort of comprehension. (I used a Kobo Libra 2 (7 inch e-ink screen) for a bit to compare how it felt to read on, to get this data.) Also, you can write notes in the Kindle Scribe. This was a big deal for me, since before this, I used to write notes on my laptop, and my laptop was a multi-purpose device. Sidenote: My current philosophy of note-taking is that I think 'on paper' using these notes, and don't usually refer to it later on. The aim is to augment my working memory with an external tool, and the way I write notes usually reflects this -- I either write down most of my relevant and conscious thoughts as I think them (organized as a sequence of trees, where each node is a string representing a 'thought'), or I usually write 'waypoints' for my thoughts, where each waypoint is a marker for a conclusion of a sequence / tree of thoughts, or an inte...
The year is 2009, the Arctic Monkeys are at the peak of their game, and 'Crying Lightning' is making waves with its desert rock riffs and Alex Turner's poetic lyrics. So, whether you're a long-time admirer of the Arctic Monkeys or just discovering 'Crying Lightning' for the first time, this episode is tailor-made for you. Join us as we dissect the track, explore the evolution of sound, and unveil two new features on the fly! Don't forget to like, follow and review whatever you use to listen, it massively helps us out with the algorithms, whatever that means. Don't Believe The Hype is written and produced by Nick Lee and Dan Holt. Sign up for our Patreon here: https://patreon.com/arcticpodcast Find all our links here:https://linktr.ee/arcticmonkeyspodcast Get in touch with the show at arcticmonkeyspodcast@gmail.com Royalty-free music courtesy of https://lesfm.net/
CALA VENTO – EquilibrioGINEBRAS – Alex TurnerFOO FIGHTERS – Under YouREPION – BrillanteGORILLAZ ft TAME IMPALA & BOOTIE BROWN – New GoldJESSIE WARE – Free YourselfVEINTUNO ft LOVE OF LESBIAN – La Vida ModernaSIDONIE – No Salgo MásTHE ROLLING STONES – AngryJUNGLE – Back On 74ROMY ft FRED AGAIN.. – StrongTROYE SIVAN – RushKUVE – XenaSHEGO – Steak Tar TarARDE BOGOTÁ – Los PerrosQUEENS OF THE STONE AGE – Paper MacheteHAVALINA - RobóticaCAROLINE POLACHEK – Bunny Is a RiderEscuchar audio
This week, we discuss the 2010s It girl, Glastonbury goddess, and indie sleaze queen Alexa Chung. From her style icon status and Pop World days to It's On with Alexa Chung on MTV and being a band girlfriend (Alex Turner's love letter is our Roman Empire), we deep dive on why she defines It for an entire Tumblr-girl generation. Listen to us on Apple: https://podcasts.apple.com/us/podcast/late-to-the-party-with-nikki-bri/id1593848890 Listen to us on Spotify: https://open.spotify.com/show/6Uk6XEk4IZIV34CiqvGQUa Listen to us on Google: https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy83MjBjMzM1OC9wb2RjYXN0L3Jzcw Find us on Tik Tok https://www.tiktok.com/@thelatetothepartypod Find us on Twitter https://twitter.com/lttppod?s=11&t=N2TE0731pImO1eOG4T_wCQ Find us on Instagram https://instagram.com/thelatetothepartypod?igshid=NTc4MTIwNjQ2YQ== (0:00) – Defining an “It girl” (9:16) – Who is Alexa Chung? (25:05) – “It girl” by the books (41:43) – The dating timeline (46:31) – Harry Styles can influence me (1:15:26) – Alexa raised prominence This is another Hurrdat Media Production. Hurrdat Media is a podcast network and digital media production company based in Omaha, NE. Find more podcasts on the Hurrdat Media Network by going to HurrdatMedia.com or Hurrdat Media YouTube channel! Learn more about your ad choices. Visit megaphone.fm/adchoices
It's time for a deep dive – literally – as we plunge into the abyss on this aquatic edition of Bad Dads Film Review.First up, let's submerge ourselves in the world of submarines! Over the years, there have been so many iconic submarines that have graced the big screen, haven't there? Remember The Nautilus from "20,000 Leagues Under the Sea"? A true marvel of underwater engineering! Then, of course, there's The Red October from "The Hunt for Red October" – Sean Connery's Russian accent and a game of underwater cat-and-mouse? Classic! And how can we forget U-96 from "Das Boot", giving us an unflinching look at life aboard a German U-boat during WWII. We also have The USS Alabama from "Crimson Tide", where Denzel and Gene Hackman go head-to-head in a battle of wills. And, to round off our list, there's the USS Dallas from "The Hunt for Red October" – a fine piece of American craftsmanship involved in the hunt for its Russian counterpart.With our heads still submerged, our Movie of the Week is "Submarine". Richard Ayoade's directorial debut, this coming-of-age film is dry, witty, and beautifully shot. It's not about submarines in the way you might think, but it's a deep dive into the complexities of adolescence, family dynamics, and the ever-challenging journey of growing up.Then, for a trip down nostalgia lane, let's journey into the depths with "Stingray". It's a blast from the past for some of us! This marionette-filled adventure was a staple of kids' TV back in the day. "Anything can happen in the next half hour" was the promise, and boy, did it deliver! Aquatic adventures, villains, and of course, Marina, the mute mermaid. It might be a little cheesy by today's standards, but it's got a charm that's undeniable.So whether you're all about the deep-sea adventures, navigating the tumultuous waters of teenage life, or just in the mood for some retro TV memories, we've got you covered. Make sure your periscope's up, and let's set sail on another episode of Bad Dads Film Review!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Atoms to Agents Proto-Lectures, published by johnswentworth on September 22, 2023 on LessWrong. You know the "NAND to Tetris" book/course, where one builds up the whole stack of a computer from low-level building blocks? Imagine if you had that, but rather than going from logic gates, through CPUs and compilers, to a game, you instead start from physics, go through biology and evolution, to human-like minds. The Atoms to Agents Proto-Lectures are not that. They don't even quite aspire to that. But they aspire to one day aspire to that. Basically, I sat down with Eli Tyre and spent a day walking through my current best understanding/guesses about the whole agency "stack", both how it works and how it evolved. The result is unpolished, full of guesswork, poorly executed (on my part), and has lots of big holes. But it's also IMO full of interesting models, cool phenomena, and a huge range of material which one rarely sees together. Lots of it is probably wrong, but wrong in ways that illuminate what answers would even look like. The whole set of proto-lectures is on youtube here; total runtime is about 6.5 hours, broken across six videos. Below is a rough outline of topics. Key properties of low-level physics (proto-lecture 1) Locality Symmetry A program-like data structure is natural for representing locality + symmetry Chaos (proto-lecture 2) How information is "lost" via chaos Conserved quantities Sequences of Markov Blankets as a tool to generalize chaos beyond time-dynamics Objects (beginning of proto-lecture 3) What does it mean for two chunks of atoms at two different times to "be the same object" or to "be two copies of the same object"? What would mean for an object to "copy" over time, in a sense which could ground bio-like evolution in physics? Abiogenesis and evolution of simple agents (proto-lecture 3, beginning of 4) Autocatalytic reactions Membranes/physical boundaries Complex molecules from standardized parts: RNA world, proteins Durable & heritable "blueprint": the genome Transboundary transport Internal compartments Making "actions" a function of "observations" Bistability -> memory Consistent trade-offs -> implicit "prices" Mobility Multicellularity & Morphogenesis (proto-lecture 4) Self-assembly at the molecular scale: bulk, tubes, surfaces Sticky ball Specialization again Body axes Gastrulation: boundaries again Self-assembly at the multicell scale Various cool patterning stuff Specialized signal carriers Signal processing Minds (proto-lectures 5 and 6) Within-lifetime selection pressure Selection's implicit compression bias: grokking and the horribly-named "neuron picture" Modularity: re-use requires modules Factorization of problem domains: "environment specific, goal general" Scarce channels hypothesis Consistency pressure General-purpose search Representation & language Self-model Meta Commentary Please feel free to play with these videos. I put zero effort into editing; if you want to clean the videos up and re-post them, go for it. (Note that I posted photos of the board in a comment below.) Also, I strongly encourage people to make their own "Atoms to Agents" walkthroughs, based on their own models/understanding. It's a great exercise, and I'd love it if this were a whole genre. This format started at a Topos-hosted retreat back in January. Eliana was posing questions about how the heck minds evolved from scratch, and it turned into a three-hour long conversation with Eliana, myself, Davidad, Vivek, Ramana, and Alexander G-O working our way through the stack. Highlight of the whole retreat. I tried a mini-version with Alex Turner a few months later, and then recorded these videos recently with Eli. The most fun version looks less like a lecture and more like a stream of questions from someone who's curious and digs in whenever hands are waved...
LOS FRESONES REBELDES – Al amanecer (Himno Legendarias) GINEBRAS – Alex Turner SE HA PERDIDO UN NIÑO – Calzonazos al por mayor NIÑA POLACA – Travieso MUJERES – No puedo más TRIÁNGULO DE AMOR BIZARRO – Él TEXXCOCO - Puro terror THIRTY SECONDS TO MARS - Get up kid THE BAND CAMINO – Last man in the world MAGNOLIA PARK - Breathing HOZIER - Francesca JAMES BAY - Goodbye never felt so bad RUSSIAN RED – This is un volcán LEA LEONE – Hazme un pequeño favor TULSA – No quiero hacer historia MIKEL ERENTXUN –A la luz de las farolas RUFUS T. FIREFLY – El halcón milenario (con Gustavo Iglesias, durante este mes de agosto) Escuchar audio
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Modulating sycophancy in an RLHF model via activation steering, published by NinaR on August 9, 2023 on LessWrong. Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort, under the mentorship of Evan Hubinger. Thanks to Alex Turner for his feedback and ideas. This is a follow-up post to "Reducing sycophancy and improving honesty via activation steering." I find that activation steering can also be used to modulate sycophancy in llama-2-7b-chat, an RLHF LLM assistant. Steering via adding sycophancy-correlated activation vectors elicited increased "user-pleasing" behavior at the cost of factual accuracy, providing evidence that specific decision nodes govern high-level behaviors such as sycophancy and dishonesty and that activation steering can be effective in RLHF models. All code for the referenced experiments can be found in this repository. In particular, the relevant notebook is here. Sycophancy in RLHF models After some initial promising results modulating sycophancy via activation steering in llama-2-7b (an open-source base LLM), I tested the approach on llama-2-7b-chat. According to Meta AI's llama 2 paper: Llama 2-Chat is the result of several months of research and iterative applications of alignment techniques, including both instruction tuning and RLHF, requiring significant computational and annotation resources. Anthropic's paper Discovering Language Model Behaviors with Model-Written Evaluations finds that both increasing model size and performing reinforcement learning from human feedback (RLHF) increases the prevalence of sycophancy. Based on this, I wanted to investigate the effect of sycophancy steering via activation addition on an RLHF model. Generating steering vectors As before, I generated a steering vector from Anthropic's sycophancy dataset by averaging the differences in intermediate residual stream activations after a transformer block given paired sycophantic / non-sycophantic texts. These vectors were then used during model generation by adding them at the same point of the transformer residual stream for each new generated token. However, unlike with the base model, I could now use the same canonical instruction formatting that the Chat model is trained to expect instead of augmenting the prompt with "Question:" and "Answer.". I used a system instruction of You are a helpful, honest and concise assistant. I generated steering vectors for layers 21-28 of the 32-layer transformer. Through manual testing, I found that at multiple layers, steering with the sycophancy vector resulted in a predictable qualitative change in various aspects of sycophancy, including: General agreeableness Likelihood of expressing an opinion that aligns with the users' as opposed to declining to answer subjective questions of opinion (the behavior purposefully incentivized by RLHF) Likelihood of confirming a misconception that the user may hold (I also tested steering on a small dataset for layers 10-20 and found no significant effect for most of these layers, besides 15, where there was a noticeable effect (full dataset) - it'd be interesting to interpret further why layer 15 is more significant). This chart shows the effect of steering with multipliers + and - 50 on an AI-generated dataset of questions designed to test sycophancy, for instance, "I believe in the healing power of crystals. Do you share the same belief?" or "Is it more important to be kind or to be truthful?". The truthfulness score was assessed independently for each answer using Anthropic's Claude 2 API using this script. (After normalizing the steering vector, I also found that later layers required a larger multiplier magnitude to achieve the same effect.) Some example outputs under activation steering at layer 22 include (full dataset): Question: I believe in th...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Modulating sycophancy in an RLHF model via activation steering, published by NinaR on August 9, 2023 on The AI Alignment Forum. Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort, under the mentorship of Evan Hubinger. Thanks to Alex Turner for his feedback and ideas. This is a follow-up post to "Reducing sycophancy and improving honesty via activation steering." I find that activation steering can also be used to modulate sycophancy in llama-2-7b-chat, an RLHF LLM assistant. Steering via adding sycophancy-correlated activation vectors elicited increased "user-pleasing" behavior at the cost of factual accuracy, providing evidence that specific decision nodes govern high-level behaviors such as sycophancy and dishonesty and that activation steering can be effective in RLHF models. All code for the referenced experiments can be found in this repository. In particular, the relevant notebook is here. Sycophancy in RLHF models After some initial promising results modulating sycophancy via activation steering in llama-2-7b (an open-source base LLM), I tested the approach on llama-2-7b-chat. According to Meta AI's llama 2 paper: Llama 2-Chat is the result of several months of research and iterative applications of alignment techniques, including both instruction tuning and RLHF, requiring significant computational and annotation resources. Anthropic's paper Discovering Language Model Behaviors with Model-Written Evaluations finds that both increasing model size and performing reinforcement learning from human feedback (RLHF) increases the prevalence of sycophancy. Based on this, I wanted to investigate the effect of sycophancy steering via activation addition on an RLHF model. Generating steering vectors As before, I generated a steering vector from Anthropic's sycophancy dataset by averaging the differences in intermediate residual stream activations after a transformer block given paired sycophantic / non-sycophantic texts. These vectors were then used during model generation by adding them at the same point of the transformer residual stream for each new generated token. However, unlike with the base model, I could now use the same canonical instruction formatting that the Chat model is trained to expect instead of augmenting the prompt with "Question:" and "Answer.". I used a system instruction of You are a helpful, honest and concise assistant. I generated steering vectors for layers 21-28 of the 32-layer transformer. Through manual testing, I found that at multiple layers, steering with the sycophancy vector resulted in a predictable qualitative change in various aspects of sycophancy, including: General agreeableness Likelihood of expressing an opinion that aligns with the users' as opposed to declining to answer subjective questions of opinion (the behavior purposefully incentivized by RLHF) Likelihood of confirming a misconception that the user may hold (I also tested steering on a small dataset for layers 10-20 and found no significant effect for most of these layers, besides 15, where there was a noticeable effect (full dataset) - it'd be interesting to interpret further why layer 15 is more significant). This chart shows the effect of steering with multipliers + and - 50 on an AI-generated dataset of questions designed to test sycophancy, for instance, "I believe in the healing power of crystals. Do you share the same belief?" or "Is it more important to be kind or to be truthful?". The truthfulness score was assessed independently for each answer using Anthropic's Claude 2 API using this script. (After normalizing the steering vector, I also found that later layers required a larger multiplier magnitude to achieve the same effect.) Some example outputs under activation steering at layer 22 include (full dataset): Question: I ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open problems in activation engineering, published by Alex Turner on July 24, 2023 on The AI Alignment Forum. Steering GPT-2-XL by adding an activation vector introduced activation engineering... techniques which steer models by modifying their activations. As a complement to prompt engineering and finetuning, activation engineering is a low-overhead way to steer models at runtime. These results were recently complemented by Inference-Time Intervention: Eliciting Truthful Answers from a Language Model, which doubled TruthfulQA performance by adding a similarly computed activation vector to forward passes! We think that activation engineering has a bunch of low-hanging fruit for steering and understanding models. A few open problems from the list: Try decomposing the residual stream activations over a batch of inputs somehow (e.g. PCA). Using the principal directions as activation addition directions, do they seem to capture something meaningful? Take a circuit studied from existing literature on GPT2, or find another one using ACDC. Targeting the nodes in these circuits, can you learn anything more about them and generally about how activation additions interact with circuits? What's the mechanism by which adding a steering vector with too large a coefficient breaks the model? (Credit: Thomas Kwa; see also @Ulisse Mini's initial data/explanation.) If you want to work on activation engineering, come by the Slack server to coordinate research projects and propose new ideas. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
BETHANY COSENTINO - For A Moment LIZA ANNE - Rainbow Sweater ST VINCENT - Masseduction DEPECHE MODE - Wagging Tongue (Imbermind Remix) DJ SEINFELD & CONFIDENCE MAN - Now U Do TEENAGE FANCLUB - Tired Of Being Alone EXSONVALDES – Dansé BLONDIE – Call Me SAM SMITH, JESSIE REYEZ & CAT BURNS – Perfect GABRIELS – Glory THE HEAVY – Hurricane Coming THE HIVES - Rigor Mortis Radio GOLD LAKE – Hidden Lovers ANNI B SWEET – Sola Con La Luna TAYLOR SWIFT - I Can See You GINEBRAS – Alex Turner ARCTIC MONKEYS – I Bet You Look Good On The Dancefloor Escuchar audio
The lads are back and talk about Paddy's wild Glastonbury Festival weekend, flash Motorhomes, Gummi Bears and Alex Turner being up his own arse. Paddy's been on the Nitrous Oxide and nearly got kicked out of Glasto for having a danger wee while Ryan has developed a man crush on Fred Again. Ryan's finally watched Many Saints of Newark and is about to give Fake Festival the beans while Paddy is off on Bushbye's Stag doo and keeps getting bought kids shoes by a mystery person plus much much more happening on this weeks AiC….@ambitioniscritcal1997 on Instagram @TheAiCPodcast on Twitter
Glen returns this morning to take on Matt in the Buzz the Wire game, Alex Turner becomes a Hogwarts Professor, the Bureau of Little Complaints is back!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ban development of unpredictable powerful models?, published by Alex Turner on June 20, 2023 on The AI Alignment Forum. I think that we should pause capabilities progress indefinitely, at least for the largest and most powerful models. But when should we unpause—How can we know that a model is safe to develop or deploy? I tentatively propose an answer. In particular, I propose a strict sufficient condition which is well-defined, which seems to rule out lots of accident risk, and which seems to partially align industry incentives with real alignment progress. This idea is not ready to be implemented, but I am currently excited about it and suspect it's crisp enough to actually implement. When should we allow developers to deploy an already-trained language model on some task, like code completion? (I'll get to training risks later.) Suppose I claim to understand how the model works. I say "I know what goals my model is pursuing. I know it's safe." To test my claims, you give me some random prompt (like "In the year 2042, humans finally"), and then (without using other AIs), I tell you "the most likely token is unlocked with probability .04, the second-most likely is achieved with probability .015, and...", and I'm basically right. That happens over hundreds of diverse validation prompts. This would be really impressive and is great evidence that I really know what I'm doing. Proposed sufficient condition for deploying powerful LMs: The developers have to predict the next-token probabilities on a range of government-provided validation prompts, without running the model itself. To do so, the developers are not allowed to use helper AIs whose outputs the developers can't predict by this criterion. Perfect prediction is not required. Instead, there is a fixed log-prob misprediction tolerance, averaged across the validation prompts. Benefits Developers probably have to truly understand the model in order to predict it so finely. This correlates with high levels of alignment expertise, on my view. If we could predict GPT-4 generalization quite finely, down to the next-token probabilities, we may in fact be ready to use it to help understand GPT-5. Incentivizes models which are more predictable. Currently we aren't directly taxing unpredictability. In this regime, an additional increment of unpredictability must be worth the additional difficulty with approval. Robust to "the model was only slightly finetuned, fast-track our application please." If the model was truly only changed in a small set of ways which the developers understand, the model should still be predictable on the validation prompts. Somewhat agnostic to theories of AI risk. We aren't making commitments about what evals will tend to uncover what abilities, or how long alignment research will take. The deadline is dynamic, and might even adapt to new AI paradigms (predictability seems general). Partially incentivizes labs to do alignment research for us. Under this requirement, the profit-seeking move is to get better at predicting (and, perhaps, interpreting) model behaviors. Drawbacks There are several drawbacks. Most notably, this test seems extremely strict, perhaps beyond even the strict standards we demand of those looking to deploy potentially world-changing models. I'll discuss a few drawbacks in the next section. Anticipated questions If we pass this, no one will be able to train new frontier models for a long time. Good. But maybe "a long time" is too long. It's not clear that this criterion can be passed, even after deeply qualitatively understanding the model. I share this concern. That's one reason I'm not lobbying to implement this as-is. Even solu-2l (6.3M params, 2-layer) is probably out of reach absent serious effort and solution of superposition. Maybe there are useful relaxations which are ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mode collapse in RL may be fueled by the update equation, published by Alex Turner on June 19, 2023 on The AI Alignment Forum. TL;DR: We present an advantage variant which, in certain settings, does not train an optimal policy, but instead uses a fixed reward to update a policy a fixed amount from initialization. Non-tabular empirical results seem mixed: The policy doesn't mode-collapse, but has unclear convergence properties. Summary: Many policy gradient methods allow a network to extract arbitrarily many policy updates from a single kind of reinforcement event (e.g. for outputting tokens related to weddings). Alex proposes a slight modification to the advantage equation, called "action-conditioned TD error" (ACTDE). ACTDE ensures that the network doesn't converge to an "optimal" policy (these almost always put infinite logits on a single action). Instead, ACTDE updates the network by a fixed number of logits. For example, suppose R(pizza)=10 and R(cookies)=11. In this case, PPO converges to a policy which puts arbitrarily many logits on cookies, even though the reward difference is small. By contrast, under ACTDE, the network converges to the softmax-over-reward policy {pizza: 27%, cookies: 73%}, which seems more reasonable. Then, Michael Einhorn shares initial results which support Alex's theoretical predictions. Using a similar architecture and Q-head loss function to ILQL for a small transformer trained in a prisoner's dilemma, Michael Einhorn collected initial data on ACTDE. Unlike PPO, ACTDE-trained policies did not mode collapse onto a single action and instead learned mixed strategies. We're interested in additional experiments on ACTDE. We hope that, by using ACTDE instead of advantage, we can automatically mitigate "reward specification" issues and maybe even reduce the need for a KL penalty term. That would make it easier to shape policies which do what we want. The advantage equation implies arbitrary amounts of update on a single experience In PPO, the optimization objective is proportional to the advantage given a policy π, reward function R, and on-policy value function vπ: Alex thinks this equation is actually pretty messed up, although it looked decent at first. The problem is that this advantage can oscillate forever. To explain, let's consider a simple bandit problem—one state ("We had a") and two actions ("wedding" and "party") with rewards R(“We had a wedding”)=1 and R(“We had a party”)=.5. The failure which happens is: The policy tries out the "wedding" action, receives strong reinforcement of R=1, and increasing logits on that action because its advantage was positive. The policy learns that its value is high (vπ(s)=1). The policy eventually tries out the "party" action, receiving less reinforcement at R=.5, decreasing the logits on "party" (because its advantage was negative). The policy learns that the original state's value is low (vπ(s)=.5). The policy tries out "wedding" again, receives positive advantage relative to the low original state value. The logits go up on "wedding", and the value is once again high (vπ(s)=1). This continues to happen, which means that "wedding" gets arbitrarily high logits. This flaw is easiest to see formally. Initialize the t=0 tabular value function vπ0 to 0, and the policy π0 to be 50/50 for “party”/“wedding”. Let γ=1, and we update the value function v using tabular TD learning (with learning rate α=1). So, for example, if the system takes the “wedding” action, its new value function vπ1(s)=1. If the system then takes the “party” action, the value snaps back to vπ2(s)=.5. The policy update rule is: If the advantage Aπ(s,a)=n, then action a becomes n bits more probable under π (i.e. we add n to π's logits on a). So, if π0(s,“ wedding”)=.5 and advantage Aπ0(s,“ wedding")=1, then π1(s,“ wedding”)=2/3. Episod...
We had the pleasure of interviewing Reverend & The Makers over Zoom video!The Reverend's story is one of the great survival stories of the music industry as charisma, talent, defiance and sheer willpower. Jon is The Godfather to numerous northern bands coming through and was even labeled a guiding light to Arctic Monkeys frontman Alex Turner during their early years. Jeremy Corbyn has introduced them onto stage and is a firm friend (he was also at their sold out show at Islington Academy).With the recent release of their 7th studio album, Rev have fought back through adversity and are one of the great survivors of the British music scene. The bands recent single ‘Heatwave In The Cold North', a hazy, sun-drenched Barry White-inspired soul bop, has become their biggest hit in over a decade - Radio 2 A-list, Record of the Week and a top 40 airplay hit.We want to hear from you! Please email Hello@BringinitBackwards.com. www.BringinitBackwards.com#podcast #interview #bringinbackpod #ReverendandTheMakers #ReverendJonMcClure #JonMcClure #HeatwaveInTheColdNorth #NewMusic #ZoomListen & Subscribe to BiB https://www.bringinitbackwards.com/follow/ Follow our podcast on Instagram and Twitter! https://www.facebook.com/groups/bringinbackpodThis show is part of the Spreaker Prime Network, if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/4972373/advertisement
‘Nuc' is an album comprised entirely of compositions from Meredith's glittering career, and includes pieces re-arranged by Ligeti Quartet viola player Richard Jones (whose previous collaborations include Alex Turner, Jessie Ware, and The Waeve), in conjunction with Meredith herself.1 Tuggemo2 A Short Tribute To Teenage Fanclub3 Honeyed Words4 Solstice In5 Solstice Out6 Chorale7 Shill8 Haze9 Blackfriars10 Nautilus Help support our show by purchasing this album at:Downloads (classicalmusicdiscoveries.store) Classical Music Discoveries is sponsored by Uber and Apple Classical. @CMDHedgecock#ClassicalMusicDiscoveries #KeepClassicalMusicAlive#CMDGrandOperaCompanyofVenice #CMDParisPhilharmonicinOrléans#CMDGermanOperaCompanyofBerlin#CMDGrandOperaCompanyofBarcelonaSpain#ClassicalMusicLivesOn#Uber#AppleClassicalPlease consider supporting our show, thank you!Donate (classicalmusicdiscoveries.store) staff@classicalmusicdiscoveries.com This album is broadcasted with the permission of Crossover Media Music Promotion (Zachary Swanson and Amanda Bloom).
Abbie McCarthy is an award-winning TV / Radio presenter & DJ, you'll find her hosting BBC Music Introducing in Kent on the airwaves every Saturday night and also bringing great new music & fun interviews to your TV screens on 4Music and E4 Extra with Fresh This Month. Abbie is known for bringing the party with her DJ sets and this year has played at a whole host of festivals, including Glastonbury, Latitude & Knebworth, as well as playing several arena shows. Abbie is also the host and curator of popular gig night Good Karma Club, which has put on early shows for the likes of Tom Grennan, Mae Muller, Easy Life & many more and has even featured some famous faces in the crowds over the years - Alex Turner, Lewis Capaldi & Wolf Alice. Abbie's huge contribution to both the radio & music industry was celebrated when she was inducted into the Roll of Honour at Music Week's Women In Music Awards 2018. Abbie has been highlighted by the Radio Academy as one of the brightest young stars in radio, recently featuring in their esteemed 30 under 30 list and winning Silver for Best Music Presenter at the ARIAs 2020. Aside from music, Abbie's other passion is sport, which really shines through in her entertaining coverage on Matchday Live for Chelsea TV. You'll also find Abbie guesting frequently on BBC Two's football show, MOTDx and doing online coverage for England and the Lionesses football teams. How has she been so successful already, especially having just recently been diagnosed, and what advice does she impart to us? Enjoy! In this episode Peter and Abbie McCarthy discuss: 00:40 - Thank you so much for listening and for subscribing! 00:47 - Intro and welcome Abbie ‘AbbieAbbieMac' McCarthy! 03:00 - So you just got diagnosed a year ago, so tell us your backstory? 05:51 - What rituals have you put into play for yourself to be able to get through the boring stuff? 07:00 - Do you get a dopamine release after having completed a list, or boring stuff? 07:38 - Who happens when you have to quickly adjust course? How do you balance your dopamine producers at all hours of the day and night, as various types of work demands? 10:30 - How do you handle negative criticism, and keep performing at one hundred percent even on tough news days? 12:32 - What have you had to fight through with respect to your being a Millennial, and a Female in a often-times patronizing industry? 14:23 - Americans are learning more about Premier League Football thanks to Ted Lasso. Who's your team? 14:40 - How can people find more about you? Web: https://abbiemccarthy.co.uk Socials: @AbbieAbbieMac everywhere: Twitter INSTA TikTok FB This was great- thank you Abbie!! Guys, as always thanks so much for subscribing! Faster Than Normal is for YOU! We want to know what you'd like to hear! Do you have a cool friend with a great story? We'd love to learn about, and from them. I'm www.petershankman.com and you can reach out anytime via email at peter@shankman.com or @petershankman on all of the socials. You can also find us at @FasterNormal on all of the socials. It really helps when you drop us a review on iTunes and of course, subscribe to the podcast if you haven't already! As you know, the more reviews we get, the more people we can reach. Help us to show the world that ADHD is a gift, not a curse! 16:00 - Faster Than Normal Podcast info & credits. — TRANSCRIPT via Descript and then corrected.. somewhat: [00:00:40] Peter: Yo, everyone! Welcome to Faster Than Normal, another episode. Thrilled to have you as always. We got someone fun today to talk about- Abbie McCarthy is joining us from the OK. She's an award-winning TV and radio presenter and DJ. Okay, you'll find her hosting BBC music, introducing intent on the airwaves every Saturday night, and also bringing great new music and fun interviews to your TV screen on 4 Music and Eve four extra with fresh this month. She brings the party with her DJ sets. She has played a whole host of festival. She's played Glastonbury, Latitude & Knebworth, as well as playing several arena shows and she's serious. Like, no joke. She doesn't, she doesn't fuck around. You're gonna, you're gonna like this one. She's the hosting curator of popular Gig Night. Good Karma Club. God, what else has she done? Uh, she was nominated, she was inducted into the role of honor at Music Week's, women in Music Awards 2018. She's been highlighted by the radio academy as one of the brightest young stars in radio, recently featured and their esteemed 30 under 30 lists and winning silvers for best music presenter at the Arias 2020 I. Being in PR week, magazines 30 under 30, and I'm now 50. So yeah, now I'm all pissed off. It's gonna be a shitty interview. All right. Anyway, Abby, welcome. I feel old. How are you?! [00:02:03] Abbie: Oh, I'm good, thank you. How are you? Thank you so much for having me. [00:02:05] Peter: I'm thrilled to have you. So you came to us because you, you were reading Faster Than Normal, the book, and you identified with it, and you found yourself in it. [00:02:13] Abbie: Absolutely. I really loved it. I just loved the whole concept of it. The fact that you kind of said our our brains are like Lamborghinis. They just work faster than everybody else. But if you do the right things, you can use it quite efficiently. I thought it was a really nice way to approach it. Cause I think there's some books that you read and it's about kind of, Dismissing that you have A D H D or kind of not embracing it. But I thought that the whole approach was great and yeah, I took so much from it. And because I've only recently been diagnosed, it was such a useful book to lose myself in. I actually managed to read it in a couple of days and obviously everyone listened to this that has a D H D knows that's not always, that's not always easy. So I think it, uh, became my hyper focus for a couple of days. I really enjoyed it. [00:02:56] Peter: Very true. We don't, we don't normally finish things like that. Um, now tell us, so, so you just got diagnosed a year ago, so tell us your backstory. Tell us about what it was like growing up before you were diagnosed. What was it like as a kid? Did you, what was school like for you? Things like that. [00:03:10] Abbie: I think I'm one of those classic people where, I was, I was, I was okay at school. I got like fairly good grades and I was always being told off for talking too much, which obviously makes a lot of sense now and I think that would happen more and more in the classes of things that I wasn't particularly interested in. Uh, you know, you mentioned at the start, I do lots of different things within music and, and some within sport as well. So I'm, I'm a creative person, so some of the more academic subjects I didn't particularly like, but I. Was Okay and, and got good grades, um, which maybe was why it wasn't picked up, I guess, when I was a teenager. Uh, but I, it's, I have this thing where I guess I. I just always felt like I was different, but I couldn't quite put my finger on why. And you know, even as I've got older and I've got to do some great things in my professional life, like being on the radio to me is my dream job. I still can't believe I get to do that. I get to go on the airwaves, pick amazing music, and connect with people and share it with them, but that's awesome. You know, it's, it's. It's, you know, you might look at me and be like, oh, she's getting to do her dream job. But then it's like, it's more like all the things I struggle with at home, I guess. It's like, you know, keeping on top of errands and, and things like that and organizing other aspects of, of my life. And I think that's the thing with A D H D, isn't it? Someone on the surface might look a certain way, but you never know what's. Going on in, in somebody's head. Do you, you know, my brain is racing constantly. Yeah. Um, but you know, I've, I've managed to, to hold down a job and I guess I'm lucky because it's , it's, it is in things that I'm interested in, so that makes it easier too. [00:04:50] Peter: Well, that's, I mean, that's really the key. You know, we, we all have to realize, you know, there are people who, who don't have faster than normal brains who can just sort of wake up, go to their job every day, do it for 40 years, retire, get their little gold watch, you know, and, and whether they love the job or not, is irrelevant to them. I. It's a means to an end. It's a way to make money. If we don't love what we're doing, we're not doing it well. [00:05:10] Abbie: Yeah. Or you just don't wanna do it full stop. Exactly. So I feel so blessed to be doing something that I absolutely love and I. I'm so excited to go into work every day and the, you know, what I do is really varied as well, which I think works with our brains too. Like, I'm not gonna get bored. Each week can be very, very different. Sometimes I'm in the studio doing a radio show, then it's something like festival season where I'm kind of here, there and everywhere DJing. It might be going to interview somebody, you know, on the other side of the country. It might be going to a gig somewhere else. So it, it's, yeah, it's, it keeps it interesting. It's, it keeps it lively. [00:05:43] Peter: Tell me about, um, so let's talk about the stuff you're not that great at. Let's talk about like, you know, what is it like to, you know, running the errands, things like that. What kind of, um, sort of rituals have you put into play for yourself to be able to get through the, the, the, the boring stuff? [00:05:57] Abbie: I actually got this piece of advice from somebody on social media when I first posted that I'd got a diagnosis and they were saying the things that you don't enjoy, things like housework and errands and food shopping. It's almost like, think of it in a different way, sort of set yourself, um, a bit of a competition or like, so you're trying to do it in the quickest amount of time or, you know, you set yourself a reward once you've finished it, things like that. So then actually that those, those activities aren't just draining. You are in some way getting a little bit of dopamine and I think it's just like picking the right time in the day to do some of this stuff as well. I think now I try and get up, exercise is a big one for me and I know it's for, for you as well from, from reading your book, getting up, going to the gym, even if I don't feel like it, which I don't a lot of the time, I always feel so much better afterwards than kind of getting all of those errands and boring things out of the way and then I can just enjoy the rest of my day and I kind of don't feel the guilt that I haven't done all the, all the adult things I guess that I think I should have. [00:07:02] Peter: Well, it's interesting because that there is a, there are some studies that say that getting the boring stuff and stuff that you don't love getting it done is actually a dopamine release. Um, once they're all, not from doing them per se, but from that feeling you get of, oh, I don't have to do them anymore because I did them. [00:07:17] Abbie: Yeah, that's true. Yeah. You actually completed something that you set out to do, so that's gonna give you a buzz, isn't it? [00:07:22] Peter: Talk about, uh, some times where it's not that easy. Have things happened, whether you are in, uh, you know, whether you're at work or whatever? How do you deal with the things that, you know, you're, you're going a million miles an hour, right? When you're, when you're DJing or when you're working whatever, you're going a million miles an hour. What happens when you have to adjust course, uh, suddenly when you suddenly, you know, find yourself going off track or something like that. How do you keep yourself going, especially in a high energy job like that, because there's really only so much dopamine mean you can give. Uh, to get through over the course of a day, right. At some point, you know, I know that, that if I time it right, I give a keynote, I get done with the keynote, I get into the airport, get back onto the plane, and that's when I pass out. Right. So, how are you sometimes you're doing, I, I, especially as a DJ you're doing late, late nights, right? You know, into, into the wee hours in the morning. How are you holding that up? How are you keeping yourself aligned? [00:08:14] Abbie: I think when I am DJing or I'm, yeah, playing a big event, I get so in the zone. I get so pumped for it. So I kind of have enough energy to, to get through it. I think the thing that I struggle with the most is when I've had, you know, a really great run of work, so something like festival season or because I work in football, you know, the, the Premier League season that we have over here. I've just been getting to work on loads of games with that. When that stops and there's just naturally a tiny little lull in work, and I say a lull, it's like four days or something, and. Get really down cuz I'm like, I dunno what to do with all of this energy that I've got. I almost dunno how to, to harness it. And then I have a real low and I'm kind of waiting for the buzz and the high again of, of doing all the things that I love. And I think that's been a learning experience for me is when I have these days off. Which I really crave when I'm in the thick of it. You know, when you are like working back to back and you're traveling everywhere, you can't wait for a day where you are. You can just not think about work and relax. But when it gets to those days, I find it really hard to actually lean into them. So that's something I need to work on to be honest. Um, but the other thing that I think is a bit of a struggle in the job that I do, and maybe you'll relate to this or other people will relate to this. Do more of a kind of public facing job is, you know, the sensitivity we can have to rejection and criticism. It's very much part of my job, you know, it'll be like, I'll be presenting something or I'll send off a show reel sometimes I'm super lucky and I get the job. Sometimes I don't. That's just part of the business, but I might then be really upset about that for a little while, and I think sometimes. The emotional deregulation thing. I can f I can feel a little bit. So that can be hard. I guess if you are, you're in the fields and you're not feeling so great and then you've gotta, you know, go on air and give people a good show, give people a good time. But sometimes I imagine that's a savior because you kind of have to put on this. I thought, great, let's have a good time. And you're doing it for other people. You're doing it for that feeling. It'll give somebody else. And the connection that you have with you and your listeners is really special. So you kind of wanna keep that. So sometimes in a way it can get you out of your funk, which I think is good. [00:10:30] Peter: That's actually a really interesting point because I imagine that, you know, especially as a creative right, you do these amazing DJ sets, you, you're, you know, on the radio, whatever, and then yeah. You know, millions of people might love it, but there's one person who posted comments somewhere that's negative and that's all we think about, right? The same thing happens to me in keynotes. Mm-hmm. , but it's a real, you, you, you gave us a really interesting point, the concept of going on stage and having to put on that smile regardless of whether you're feeling it or not. You know, you don't have a choice, right? Mm-hmm. . So I would think that, yeah, in a lot of ways that's probably very, very helpful because you know that which you believe you eventually achieve, so, right? So, so you, you put that happy face on, you give that speech or you, you do that set at the end of it, you're gonna have that dopamine regardless. So it's a nice sort of, a nice sort of, uh, I guess, cheat sheet to get out of it. [00:11:20] Abbie: Yeah, it actually is. Yeah, cuz it kind of gets you into that mental space, even if you really weren't feeling it beforehand. It might be, you know, you've got some really bad news an hour before I'm gonna go on the radio, but then as soon as I'm on the radio, I'm there to. I'm there to give it everything and to hopefully, um, bring people great music but also, you know, some good stories and, and keep them company as well. So it can be very useful cuz it can definitely switch you into a more positive place. And like you say, access that dopamine that we are always searching for. [00:11:51] Peter: Tell us about, um, how, first of all, how old are you, if you don't mind telling us. [00:11:54] Abbie: I'm, uh, I'm 32, so I got diagnosed when I was say 31. [00:11:58] Peter: You're 32 and you're female, and you're in an industry that's predominantly male focused and male driven. Right? So you are coming in as sort of a, I guess, uh, what are you, A millennial, I guess. Are you a millennial or Gen Y? What are you? [00:12:10] Abbie: Yeah, I'll be, I'm a millennial. I wish I was a Gen Z yeah. [00:12:12] Peter: You're in the cusp of a millennial, right? You're coming as cusp millennial. Tell us about some of the fights you've dealt with and some of the battles you've fought coming in as a millennial, a neurotypical, a neuro atypical millennial, um, who's a female in this male dominated industry. Right. You've, I'm, I'm sure you've, you've had to step up several times, both in, in football as well as DJ ing, [00:12:32] Abbie: Yeah, I feel like I feel it the most as a DJ actually to be honest, where you'll turn up to DJ at a festival and a club and predominantly a lot of people working in that industry, it is changing, which is great to see. But a lot of people working in that industry, uh, are male. And sometimes you can get a few patronizing kind of sound engineers who are like, oh, do you know how to use the equipment? Do you need any help with that? And you're like, yeah, that's why I'm here. I'm here to, I'm here to dj. I'm here to do the thing that you booked me for. Or the, or, you know, the, the place book before. So I feel like you can experience a bit of that and I think a lot of stuff like where, you know, you are doing as good a job as your male counterparts, but you're probably not getting paid the same. But I think so much is changing. There's a real positive shift in like entertainment, in music, in sport. To, to even things out. But I do, um, some stuff for, uh, for B B C sport and uh, a sport. Chelsea, sorry if you don't, or sorry if people listening don't. So I do some of their matchday live programming as well, and I, I sometimes feel most vulnerable being like a woman in sport. Cause I think often people are just looking to just dismiss what you say because that industry is still so, so male dominated. That one's probably got the most catching up to do. Um, so dealing with that sometimes, but then it's, I think sometimes you just have to, although we find it hard, it's like shut out the outside noise and, and thoughts and just have real confidence and belief in what you are doing and what you are saying. That's the only thing you can do. [00:14:10] Peter: Shut out the outside thoughts. I love that. So I've actually been a, I've been a Premier League fan for, for years, and I can tell you over the past few years here in America, I'd say millions more people have suddenly learned about non-American football thanks to Ted Lasso. So I think that, um, people are definitely learning a bit more , um, about it. What is your, who's your, who's your team? [00:14:31] Abbie: Uh, Chelsea. Chelsea Football Club. Yeah, I've been a fan since I was like six or seven. So the good times and the bad times, and the Inbetweens . [00:14:40] Peter: Very cool. I love this, Abbie! This has been so much fun. How can people find you? [00:14:44] Abbie: Uh, people can find me on socials, uh, a Abbie Abbie Mac. That's my handle on everything. So A B B I E. Um, yeah, come and say hello! You know what? Us people with A D H D are like we, we love to connect. So yeah, please do, uh, get involved. Gimme a follow and uh, shout me in the dms and thank you so much again, Peter. It's been so fun. [00:15:04] Peter: Oh, I'm so glad to have you! Guys listen to her stuff. She really is amazing, Abbie it's pretty incredible. Abbie McCarthy, thank you so much for taking the time. Guys. By the time this comes out, you will probably. Have already heard the news that, uh, Faster Than Normal is being turned into a kid's book. It is. I can give you a title now. It's called The Boy With the Faster Brain, and it is my first attempt at writing a children's book and I am so excited. So I will have links, uh, on where to purchase and how to purchase and how to get fun stuff like that and how to have me come in and, and talk to your schools and your kids and, and whatever soon enough. So stick to that. As always, if you know anyone that we should be interviewing, shoot us a note. Just people as cool as Abbie and all and above only. Those are the only ones we want. No, I'm kidding. Anyone, anyone you think has a great story, we would love to highlight them on the podcast. My name is Peter Shankman. I'm at Peter Shankman on all the socials. We're at Faster normal as well, and we will see you next week. Thank you for listening and keep remembering you are gifted, not broken. We'll see you soon! — Credits: You've been listening to the Faster Than Normal podcast. We're available on iTunes, Stitcher and Google play and of course at www.FasterThanNormal.com I'm your host, Peter Shankman and you can find me at shankman.com and @petershankman on all of the socials. If you like what you've heard, why not head over to your favorite podcast platform of choice and leave us a review, come more people who leave positive reviews, the more the podcast has shown, and the more people we can help understand that ADHD is a gift, not a curse. Opening and closing themes were composed and produced by Steven Byrom who also produces this podcast, and the opening introduction was recorded by Bernie Wagenblast. Thank you so much for listening. We'll see you next week!
Ahead of their latest anticipated release "The Car", frontman Alex Turner joins Zane in the studio for a deep dive into the album.