Pixelated Audio is back from another VGMCon in Minnesota, this time about space! Joining Gene once again from last year are Pernell from Rhythm and Pixels and Carlos from Heroes Three! We did a similar panel last year; we take a broad theme and make it educational, and then use it as an excuse to cram in all sorts of VGM examples. We had some tech issues and lost some of the recording but we were able to rescue it with a more laid back "part 2" we recorded in the hotel room as a way to fill in gaps and riff for a while. Prepare for launch on this massive double episode clocking in at almost 2 hours! We discuss space, its influence on games and computing, and a wide range of games that have a direct or sometimes tenuous link to space. We've got a ton of music this episode so we hope you enjoy it. Pardon the noise in the background, it was the best we could do with a portable recorder. If you'd like to follow along with our slides that we used during the talk, feel free to click through. If the presentation doesn't show up below, click here.

Track list
0:00:00 (Bedding) Map Screen - Mass Effect (Various, 2007) - Sam Hulick
0:06:15 (Excerpt) introduction to "From Here to Infinity" narrated by Patrick Stewart
0:09:42 Intro Cutscene - Frontier: Elite II (Amiga, 1993) - David Lowe
0:27:18 Hyperspace - Star Control II (PC, 1992) - Riku Nuottajärvi
0:31:57 Alpine Start - Star Trek: New Worlds (PC, 2000) - Jeremy Soule, Inon Zur
0:35:19 Main Theme - Space Engineers (PC, 2019) - Karel Antonín
0:41:11 Happy Daymare - Xexex (Arcade, 1991) - Konami Kukeiha Club: Motoaki Furukawa (Carol Queen), Hidenori Maezawa (Michael Oldriver), Satoko Miyawaki (Rosetta Stone), Akiko Hashimoto (Shanghai Manmos)
0:45:13 Space Port -Juggernaut- - X-Men: Children of the Atom (Arcade, 1994) - Takayuki Iwai (ANACHEY··TAKAPON··) [main composer on track], Shun Nishigaki (SYUN Nishigaki), Hideki Okugawa (HIDEKI OK), Isao Abe (Isao ··oyaji··Ave)
0:47:44 Hydra (Stage 1) - Gradius IV (Arcade, 1999) - Konami Kukeiha Club: Atsuki Watanabe (Atsuki), Harumi Ueko
0:52:09 Pretty G - GuitarFreaks / DrumMania (Arcade, 2000) - Zonlu (Sunday Records)
0:57:17 Space - Chō Jikū Yōsai Macross: Scrambled Valkyrie (SFC, 1993) - Noboru Yamane
1:04:48 I Want to See the Starlight (Jolyne Kujo's Theme) - JoJo's Bizarre Adventure: All Star Battle (PS3, 2013) - Chikayo Fukuda
1:11:50 UFO Ending - Silent Hill (PS1, 1999) - Akira Yamaoka
1:18:06 (Bedding) World 2 (Moon) - Treasure Master (NES, 1991) - Tim Follin
1:20:00 (Bedding) Team Selection - Blades of Steel (NES, 1988) - Shinya Sakamoto (S. Sakamoto), Kazuki Muraoka (K. Muraoka), Atsushi Fujio (A. Fujio), Kiyohiro Sada (K. Sada)
1:21:30 (Bedding) Item Acquisition - Super Metroid (SNES, 1994) - Kenji Yamamoto
1:21:42 (Bedding) Welcome to Chessmaster - The Chessmaster (SNES, 1991) - Peter Stone
1:22:24 (Bedding) Item Acquisition - Metroid Fusion (GBA, 2002) - Minako Hamano, Akira Fujiwara
1:22:41 (Bedding) Title Theme - Sorry! / Aggravation / Scrabble Junior (GBA, 2005) - Mark Cooksey (possibly)
1:23:46 (Bedding) Course Clear (Out of This Dimension) - Star Fox (SNES, 1993) - Hajime Hirasawa
1:24:11 (Bedding) Title Theme - PGA Tour Golf (SNES, 1992) - Rob Hubbard
1:24:50 Title Theme - PGA Tour Golf (Genesis, 1991) - Rob Hubbard
1:28:53 (Bedding) Deep Core - Phalanx (SNES, 1991) - S. Yamaguchi
1:29:32 (Bedding) Dr. Wright - SimCity (SNES, 1991) - Soyo Oka
1:34:12 (Bedding) Title Theme - Space Shuttle Project (NES, 1991) - Scott Marshall
1:34:39 (Bedding) Round 1 – Remlia Castle - Astyanax (NES, 1989) - Toshiko Tasaki
1:37:18 (Excerpt) pre-launch voice clips - Space Shuttle Project (NES, 1991) - voiced by David Crane
1:45:29 Ball Launch + Main Play 2 - Star Trek 25th Anniversary (Pinball, 1991) - Brian Schmidt
1:47:52 The Moon - Ducktales Remastered (Various,
Joining the conversation are: David Girolmo (who plays Captain E.J. Smith), Heidi Kettenring (playing Ida Strauss), Mark David Kaplan (playing Isador Strauss), James Earl Jones, II (playing Edgar Beane and others) and Joel Gelman (playing John Jacob Astor IV and 2nd Officer Charles Lightoller). The power of this musical by Maury Yeston and Peter Stone is […]
This week we discuss Emma Nelson and Peter Stone from Degrassi: The Next Generation. Follow us on Instagram and Twitter @thetvdeepdive. Check out our Patreon: patreon.com/TheTVDeepDive. Email us at thetvdeepdive@gmail.com with any comments or suggestions!
Trigger warning, nothing is what you believe it to be.
$BTC 95,172 | Block Height 893,933
Today's guest on the show is Peter Stone, who joins me to talk about how we have been tricked into a life of bonded slavery.
Why does Peter believe we are illiterate, and how have we been tricked in more ways than you could possibly imagine?
Why is voting a scam, and why should we absolutely stop referring to ourselves as Plebs?
What is the Plebtorial system, and why is the Statue Of Liberty a sick joke?
We touch on many more topics in this one and I hope it stirs up some more conversations!
Learn more about Peter and his work here: https://www.thesovereignproject.live/
Find Peter's books here: https://www.thesovereignproject.live/books
ALL LINKS HERE - FOR DISCOUNTS AND OFFERS - https://vida.page/princey - https://linktr.ee/princey21m
Pleb Service Announcements: @orangepillapp. That's it, that's the announcement. https://signup.theorangepillapp.com/opa/princey
Support the pods via @fountain_app - https://fountain.fm/show/2oJTnUm5VKs3xmSVdf5n
The Once Bitten YouTube Channel: https://www.youtube.com/@Princey21m
Shills and Mench's:
CONFERENCES 2025:
BITCOIN IRELAND - DUBLIN - 23RD MAY 2025. https://www.bitcoinireland.eu/ USE CODE BITTEN - 10%
BTC PRAGUE - 19TH - 21ST JUNE 2025. https://btcprague.com/ USE CODE BITTEN - 10%
BTC HELSINKI - 15TH - 16TH AUGUST 2025. https://btchel.com/ USE CODE BITTEN - 10%
PAY WITH FLASH - Accept Bitcoin on your website or platform with no-code and low-code integrations. https://paywithflash.com/
RELAI - STACK SATS - www.relai.me/Bitten Use Code BITTEN
BITBOX - SELF CUSTODY YOUR BITCOIN - www.bitbox.swiss/bitten Use Code BITTEN
ZAPRITE - https://zaprite.com/bitten - Invoicing and accounting for Bitcoiners - Save $40
SWAN BITCOIN - www.swan.com/bitten
KONSENSUS NETWORK - Buy bitcoin books in different languages. Use code BITTEN for 10% discount - https://bitcoinbook.shop?ref=bitten
SEEDOR STEEL PLATE BACK-UP - @seedor_io use the code BITTEN for a 5% discount. www.seedor.io/BITTEN
SATSBACK - Shop online and earn back sats! https://satsback.com/register/5AxjyPRZV8PNJGlM
HEATBIT - Home Bitcoin mining - https://www.heatbit.com/?ref=DANIELPRINCE - Use code BITTEN.
CRYPTOTAG STEEL PLATE BACK-UP - https://cryptotag.io - USE CODE BITTEN for 10% discount.
A master class on true law! Robert Michael and Peter Stone deconstruct the legal system and the foundations of the false legal identity along with common misconceptions.
SUPPORT THIS WORK: ALOPodcast on CashApp (Preferred) | BuyMeACoffee.com/PatrickBlack
Peter Stone is the founder of The Sovereign Project, an institution that protects and reclaims the rights and freedom of each individual by providing powerful tools and education, while uniting others who also choose to be free. He has been on previous episodes #129, #141 and #181.
=======
Awakening Podcast Social Media / Coaching / My Other Podcasts: https://roycoughlan.com/
Health & Wellness Products: https://partnerco.world/
My Website: https://partner.co/?custid=N6543249
Our Facebook Group can be found at https://www.facebook.com/royawakening
============
About my Guest:
Peter is the founder of The Sovereign Project, an institution that protects and reclaims the rights and freedom of each individual by providing powerful tools and education, while uniting others who also choose to be free. There are two states a person can be in this world: you are either sovereign or a slave; the choice is only yours to make. Declaring you are sovereign, that your status is as a free man or woman, requires courage, fortitude and the will to stand up for your rights. This task is much less daunting when you're united with and supported by others of like mind.
What we Discussed:
- Be careful of those charging for courses (2:40 min)
- Is there controlled Opposition (5 mins)
- Symbolism (7 mins)
- What his Event was about (8 mins)
- The Government is a Corporation (9 mins)
- The Sovereign Wiki (11:30 mins)
- What is a DUNS Number (12 mins)
- Dun & Bradstreet is Global (14:30 mins)
- Are there ways of getting the Money back when we know the City or Government tricked us (18 mins)
- The Council can not issue liability orders (20 mins)
- Allodial Title (23 mins)
- They grab the Title to said Asset (26 mins)
- The King goes against God if he claims all the land (28 mins)
- People means Chattel (29 mins)
- You can not change your address as it is theirs (30 mins)
- The trickery with Plots and Lots (32 mins)
- How to claim the land (32:40 mins)
- State your claim (35 mins)
- Putting your land on the Blockchain (38 mins)
- Stopping a building using your law (41 mins)
- Disney Land Fraud (44:40 mins)
- Tricked into Informed Consent (46 mins)
- Arbitration (47:30 mins)
- Sovereign Court (49 mins)
- Microsoft was removing my Spooky2 from my laptop (50:20 mins)
- Starpage (51:30 mins)
- They are pushing Windows 11 with spyware (53 mins)
- Encryption apps are not encrypted (55 mins)
- 2 Stage encryption is data mining (58 mins)
- The dangers of new car technology (59 mins)
- Products designed to break fast (1hr)
- Using their Law when needed (1hr 2 mins)
- How do you define 'We the People' (1hr 3:30 mins)
- Never say 'You Have Violated my Constitutional Rights' (1hr 4:30 mins)
- His new Book (1hr 7 mins)
- No receipts from Post Office for Duty Tax (1hr 8:30 mins)
- Nothing in the Legal System is Lawful (1hr 12 mins)
- Postal Service Trickery (1hr 15 mins)
- Trusts (1hr 20 mins)
How to Contact Peter Stone: https://www.thesovereignproject.live/
Artificial intelligence tools might transform education, for example, by giving every student 24/7 access to an affordable tutor that's an expert in any subject and infinitely patient and supportive. But what if these AI tools give bad information or relieve students of the kind of critical thinking that leads to actual learning? And what's the point of paying to go to college if you can learn everything from AI chatbots?

Today on the show we have Art Markman—Vice Provost for Academic Affairs and a professor of psychology and marketing at the University of Texas at Austin. He's also co-host of the public radio program and podcast "Two Guys on Your Head." And we also have K.P. Procko—an associate professor of instruction in biochemistry who uses AI in the classroom and who also manages a grant program in UT Austin's College of Natural Sciences to help faculty integrate AI tools into the classroom.

Dig Deeper
A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done., Ed Surge (One researcher gave up on expert AI tutors for students, saying the tech is still decades away, and instead is focusing on AI tools to help human teachers do a better job)
Opinion: An 'education legend' has created an AI that will change your mind about AI, Washington Post (AI columnist Josh Tyrangiel says a popular AI-based math tutor "is the best model we have for how to develop and implement AI for the public good. It's also the first AI software I'm excited for my kids to use.")
Will Chatbots Teach Your Children?, New York Times (An overview of the potential benefits and risks of AI-based tutors, as well as telling hype from reality)
Will Artificial Intelligence Help Teachers—or Replace Them?, Ed Week (features UT Austin's Peter Stone, who argues the calculator didn't replace math teachers, it just required them to change the way they teach; the same will be true with AI tools.)
Opinion: College students are dropping out in droves. Two sisters could fix that., Washington Post (One company is using AI to help universities regularly check in with and support students to boost retention.)

Episode Credits
Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT's Digital Writing & Research Lab. Executive producers are Christine Sinatra and Dan Oppenheimer. Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio. Cover image for this episode generated with Midjourney, a generative AI tool.

About AI for the Rest of Us
AI for the Rest of Us is a joint production of The University of Texas at Austin's College of Natural Sciences and College of Liberal Arts. This podcast is part of the University's Year of AI initiative. The opinions expressed in this podcast represent the views of the hosts and guests, and not of The University of Texas at Austin. You can listen via Apple Podcasts, Spotify, Amazon Podcasts, RSS, or anywhere you get your podcasts. You can also listen on the web at aifortherest.net. Have questions or comments? Contact: mairhart[AT]austin.utexas.edu
We're taking a look at one of the most suspenseful crime dramas of the 1970s. Brandon is joined by lawyer and journalist, Matt Belenky, to discuss 1974's The Taking of Pelham One Two Three. Brandon and Matt praise the deliberately paced direction by Joseph Sargent. The movie perfectly mixed suspense with quirky comedy thanks to the script by Peter Stone. We also praise the performances from the ensemble cast including Walter Matthau, Robert Shaw, Martin Balsam, Jerry Stiller, Hector Elizondo and Earl Hindman.
Pat explored this provocative question with George Monbiot, Guardian columnist and author, and Peter Stone, Associate Professor of Political Science at Trinity College Dublin.
For our first episode, we're starting with the big picture. What is (or isn't) "artificial intelligence"? How can we be sure AI is safe and beneficial for everyone? And what is the best way of thinking about working with AI right now, no matter how we use it?

Here with all the answers is Peter Stone. He's a professor of computer science at UT Austin, director of Texas Robotics, the executive director of Sony AI America and a key member in the 100 Year Study on AI. He's worked for many years on applications of AI in robotics: for example, soccer-playing robots, self-driving cars and home helper robots. He's also part of UT Austin's Good Systems initiative, which is focused on the ethics of AI.

Dig Deeper
An open letter signed by tech leaders, researchers proposes delaying AI development, NPR (interview with Peter Stone)
AI's Inflection Point, Texas Scientist (an overview of AI-related developments at UT Austin)
Experts Forecast the Changes Artificial Intelligence Could Bring by 2030 (About the first AI100 study, which Peter Stone chaired)
Computing Machinery and Intelligence (Alan Turing's 1950 article describing the Imitation Game, a test to determine if a machine has human intelligence)
Good Systems (UT Austin's grand challenge focused on designing AI systems that benefit society)
Year of AI – News & Resources (News from an initiative showcasing UT Austin's commitment to developing innovations and growing leaders to navigate the ever-evolving landscape brought about by AI.)

Episode Credits
Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT's Digital Writing & Research Lab. Executive producers are Christine Sinatra and Dan Oppenheimer. Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio. Cover image for this episode generated with Adobe Firefly, a generative AI tool.

About AI for the Rest of Us
AI for the Rest of Us is a joint production of The University of Texas at Austin's College of Natural Sciences and College of Liberal Arts. This podcast is part of the University's Year of AI initiative. The opinions expressed in this podcast represent the views of the hosts and guests, and not of The University of Texas at Austin. You can listen via Apple Podcasts, Spotify, Amazon Podcasts, RSS, or anywhere you get your podcasts. You can also listen on the web at aifortherest.net. Have questions or comments? Contact: mairhart[AT]austin.utexas.edu
Stephen Cole is back on StoryBeat for the second time. An award-winning writer of musical theatre, non-fiction books, short stories, and novels, Stephen's work has been recorded, published, and produced worldwide, from New York City to London to the Middle East and Australia. With Matthew Ward he wrote the musicals After The Fair, Merlin's Apprentice, Rock Odyssey, and Casper (which originally starred Chita Rivera), The Night of the Hunter and Saturday Night at Grossinger's (with music by Claibe Richardson), and Dodsworth and Time After Time (with music by Jeff Saver), which has recently been revived at the Children's Theatre of Cincinnati. In 2005 Stephen and composer David Krane were commissioned to write the first American musical to premiere in the Middle East. The result was Aspire, which was produced in Qatar. Their hilarious cross-cultural experiences resulted in another show titled The Road To Qatar!, which has been produced in Dallas, New York and the Edinburgh International Festival (where it was nominated for Best Musical). His most recent musical, Goin' Hollywood, was produced in 2023 to rave reviews and sold-out audiences in Dallas.

Stephen has written continuity, narration, and special material for fifteen different Drama League Shows, including all-star tributes to Kander and Ebb, Liza Minnelli, Chita Rivera, Liz Smith, Peter Stone, Angela Lansbury, Patti LuPone, Kristin Chenoweth, Audra McDonald and Neil Patrick Harris. As an author, Stephen has published That Book About That Girl, I Could Have Sung All Night: the Marni Nixon story (which is currently in development as a feature film), Noel Coward: A Bio-Bibliography, and the Charles Strouse memoir Put On a Happy Face. A prolific short story writer, Stephen published his first novel, Mary & Ethel…and Mikey Who?, in January 2024. I've read Mary & Ethel…and Mikey Who? It's what you call a real hoot, especially for lovers of old broads on old Broadway. It's the most entertaining time-slipping story I've read since Kurt Vonnegut's Slaughterhouse Five.

Stephen is a recipient of a Gilman-Gonzales Falla Commendation for musical theatre as well as the prestigious Edward Kleban Award.

www.stephencolewriter.org
https://www.facebook.com/steve.cole.5076798
https://www.instagram.com/stephencolewrit
506 - Is this a dream come true? Degrassi superfans @EeveePacini and @Jocelyn Claybourne sit down for their weekly TikTok live event - when a familiar face joins in. Hang out with Jamie Johnston (Peter Stone) in this surprising interview. If you love Degrassi, this podcast is for you! Degrassi Fan Checklist: Visit patreon.com/degrassikid to watch the full video and support the podcast! Follow @Jamie Johnston on TikTok! @johnstjn! Follow @eeveepacini --- Send in a voice message: https://podcasters.spotify.com/pod/show/degrassikid/message
This episode is exclusively for paid subscribers. If you listen on Apple podcast then please sign up to Spotify or my Paid Substack. To make sure you don't miss any episodes please subscribe here:
IMPORTANT NOTICE
Following my cancellation for standing up for medical ethics and freedom, my surgical career has been ruined. I am now totally dependent on the support of my listeners, YOU. If you value my podcasts, please support the show so that I can continue to speak up by choosing one or both of the following options:
- Buy me a coffee, if you want to make a one-off donation.
- Join my Substack, to access additional content; you can upgrade to paid from just £5.50 a month.
About this conversation:
In this conversation, Peter Stone from the Sovereign Project discusses the abstract nature of governments and legal fiction, highlighting how these concepts are mere thought processes and do not physically exist. He explains the scam behind legal fictions and how people are tricked into believing in the existence of entities like HR departments and governments. Pete emphasizes the importance of questioning mandates and authority, urging individuals to seek the living, breathing individuals behind legal fictions. He explores the ideal world without governments and laws, where individuals are sovereign and responsible for their own lives. Pete also delves into the purpose and creation of birth certificates, the significance of affidavits, and the role of the Vatican in the financial system. Pete discusses various topics related to government systems and personal sovereignty. He exposes the fraudulent practices of council tax departments, highlighting the use of fake court summons. Pete also addresses the issue of scams and individuals taking advantage of those seeking to understand the truth. He introduces the Sovereign Project, which aims to educate people about their rights and provide resources to fight against unlawful actions. The conversation delves into the funding of public services and reveals how council tax is misused to finance unrelated expenses, including terrorism. Pete shares his vision of an ideal world with minimal external intervention and emphasizes the importance of individual sovereignty and personal responsibility.
We're doing something unusual with a re-do of the third episode of The Projection Booth. Yes, we're talking about Joseph Sargent's 1974 film The Taking of Pelham One Two Three. Based on the book by John Godey (pen name of Morton Freedgood) and brilliantly adapted by Peter Stone, the film stars Walter Matthau as Zach Garber, a transit cop whose train is taken hostage by Robert Shaw as Mr. Blue, and his three henchmen, Mr. Green, Mr. Grey, and Mr. Brown. Duane Swierczynski (California Bear) and Keith Gordon join Mike to discuss the film and its two remakes, while actor Sal Viscuso talks about his role in the original film. Become a supporter of this podcast: https://www.spreaker.com/podcast/the-projection-booth-podcast--5513239/support.
In 2023 we did a few Fundamentals episodes covering Benchmarks 101, Datasets 101, FlashAttention, and Transformers Math, and it turns out those were some of your evergreen favorites! So we are experimenting with more educational/survey content in the mix alongside our regular founder and event coverage. Pls request more!

We have a new calendar for events; join to be notified of upcoming things in 2024!

Today we visit the shoggoth mask factory: how do transformer models go from trawling a deeply learned latent space for next-token prediction to a helpful, honest, harmless chat assistant? Our guest "lecturer" today is Nathan Lambert; you might know him from his prolific online writing on Interconnects and Twitter, or from his previous work leading RLHF at HuggingFace and now at the Allen Institute for AI (AI2), which recently released the open source GPT3.5-class Tulu 2 model trained with DPO. He's widely considered one of the most knowledgeable people on RLHF and RLAIF. He recently gave an "RLHF 201" lecture at Stanford, so we invited him on the show to re-record it for everyone to enjoy! You can find the full slides here, which you can use as a reference through this episode.

Full video with synced slides

For audio-only listeners, this episode comes with a slide presentation alongside our discussion. You can find it on our YouTube (like, subscribe, tell a friend, et al).

Theoretical foundations of RLHF

The foundation and assumptions that go into RLHF go back all the way to Aristotle (and you can find guidance for further research in the slide below), but there are two key concepts that will be helpful in thinking through this topic and LLMs in general:

* Von Neumann–Morgenstern utility theorem: you can dive into the math here, but the TLDR is that when humans make decisions there's usually a "maximum utility" function that measures what the best decision would be; the fact that this function exists makes it possible for RLHF to model human preferences and decision making.

* Bradley-Terry model: given two items A and B from a population, you can model the probability that A will be preferred to B (or vice-versa). In our world, A and B are usually two outputs from an LLM (or at the lowest level, the next token). A small numerical sketch of this model appears at the end of this section.

It turns out that from this minimal set of assumptions, you can build up the mathematical foundations supporting the modern RLHF paradigm!

The RLHF loop

One important point Nathan makes is that "for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior". For example, it might be difficult for you to write a poem, but it's really easy to say if you like or dislike a poem someone else wrote. Going back to the Bradley-Terry model we mentioned, the core idea behind RLHF is that when given two outputs from a model, you will be able to say which of the two you prefer, and we'll then re-encode that preference into the model.

An important point that Nathan mentions is that when you use these preferences to change model behavior, "it doesn't mean that the model believes these things. It's just trained to prioritize these things". When you have a preference for a model to not return instructions on how to write a computer virus, for example, you're not erasing the weights that hold that knowledge; you're simply making it hard for that information to surface by prioritizing answers that don't return it. We'll talk more about this in our future Fine Tuning 101 episode as we break down how information is stored in models and how fine-tuning affects it.
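To make the Bradley-Terry assumption concrete, here is a minimal, self-contained Python sketch. The reward numbers are invented for illustration and this is not code from the lecture; it only shows how a scalar reward per completion turns into a pairwise preference probability, and how that probability becomes the negative log-likelihood that reward-model training typically minimizes.

```python
import math

def bradley_terry_prob(reward_a: float, reward_b: float) -> float:
    """P(A is preferred over B): a sigmoid of the reward difference."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

def pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood of the human's choice; minimizing it pushes
    the chosen completion's reward above the rejected completion's reward."""
    return -math.log(bradley_terry_prob(reward_chosen, reward_rejected))

# Toy scores a reward model might assign to two completions of the same prompt.
r_good, r_bad = 1.3, -0.4

print(bradley_terry_prob(r_good, r_bad))  # ~0.85: the preferred answer wins most of the time
print(pairwise_loss(r_good, r_bad))       # ~0.17: low loss when the scores agree with the label
print(pairwise_loss(r_bad, r_good))       # ~1.87: high loss if the preference were flipped
```

The same sigmoid-of-a-difference shape shows up again below when we get to the reward model's training loss and to DPO.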
At a high level, the loop looks something like this:

For many RLHF use cases today, we can assume the model we're training is already instruction-tuned for chat or whatever behavior the model is looking to achieve. In the "Reward Model & Other Infrastructure" we have multiple pieces:

Reward + Preference Model

The reward model is trying to signal to the model how much it should change its behavior based on the human preference, subject to a KL constraint. The preference model itself scores the pairwise preferences from the same prompt (this worked better than scalar rewards).

One way to think about it is that the reward model tells the model how big of a change this new preference should make in the behavior in absolute terms, while the preference model calculates how big of a difference there is between the two outputs in relative terms. A lot of this derives from John Schulman's work on PPO:

We recommend watching him talk about it in the video above, and also Nathan's pseudocode distillation of the process:

Feedback Interfaces

Unlike the "thumbs up/down" buttons in ChatGPT, data annotation from labelers is much more thorough and has many axes of judgement. At a simple level, the LLM generates two outputs, A and B, for a given human conversation. It then asks the labeler to use a Likert scale to score which one they preferred, and by how much:

Through the labeling process, there are many other ways to judge a generation:

We then use all of this data to train a model from the preference pairs we have. We start from the base instruction-tuned model, and then run training in which the loss for our gradient descent is based on the difference between the reward for the good completion and the reward for the bad one (the pairwise loss sketched above).
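Before moving on, a word on the "subject to a KL constraint" phrase above. In practice this is usually implemented by shaping the reward the policy is optimized against: the reward model's score minus a penalty for drifting too far from the frozen reference (instruction-tuned) model. The numbers and function below are made up for illustration and are not from the episode:

```python
def shaped_reward(rm_score: float,
                  logprob_policy: float,
                  logprob_reference: float,
                  beta: float = 0.1) -> float:
    """Reward-model score minus a KL-style penalty: the policy is discouraged
    from assigning its completion a much higher log-probability than the
    reference model does."""
    kl_term = logprob_policy - logprob_reference  # per-sample estimate of the drift
    return rm_score - beta * kl_term

# Made-up numbers: the policy likes its completion much more than the
# reference model does, so part of the reward-model score is clawed back.
print(shaped_reward(rm_score=1.2, logprob_policy=-12.0, logprob_reference=-20.0))
# 1.2 - 0.1 * 8.0 = 0.4
```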
Constitutional AI (RLAIF, model-as-judge)

As these models have gotten more sophisticated, people started asking the question of whether or not humans are actually a better judge of harmfulness, bias, etc., especially at the current price of data labeling. Anthropic's work on the "Constitutional AI" paper is using models to judge models. This is part of a broader "RLAIF" space: Reinforcement Learning from AI Feedback.

By using a "constitution" that the model has to follow, you are able to generate fine-tuning data for a new model that will be RLHF'd on this constitution's principles. The RLHF model will then be able to judge outputs of models to make sure that they follow its principles:

Emerging Research

RLHF is still a nascent field, and there are a lot of different research directions teams are taking; some of the newest and most promising / hyped ones:

* Rejection sampling / Best of N Sampling: the core idea here is that rather than just scoring pairwise generations, you generate a lot more outputs (= more inference cost), score them all with your reward model and then pick the top N results. LLaMA2 used this approach, amongst many others. (A small sketch of this idea follows the list.)

* Process reward models: in Chain of Thought generation, scoring each step in the chain and treating it like its own state rather than just scoring the full output. This is most effective in fields like math that inherently require step-by-step reasoning.

* Direct Preference Optimization (DPO): We covered DPO in our NeurIPS Best Papers recap, and Nathan has a whole blog post on this; DPO isn't technically RLHF as it doesn't have the RL part, but it's the "GPU Poor" version of it. Mistral-Instruct was a DPO model, as are Intel's Neural Chat and StableLM Zephyr. Expect to see a lot more variants in 2024 given how "easy" this was.

* Superalignment: OpenAI launched research on weak-to-strong generalization, which we briefly discuss at the 1hr mark.
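As promised in the rejection sampling bullet, here is a minimal best-of-n sketch. Both `generate` and `reward_model` are hypothetical stand-ins (in a real pipeline they would be a sampling call to your policy model and a learned reward model), so treat this as a shape of the idea rather than a reference implementation:

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for sampling one completion from the policy model."""
    return f"candidate answer #{random.randint(0, 9999)} to: {prompt}"

def reward_model(prompt: str, completion: str) -> float:
    """Stand-in for a learned reward model's scalar score."""
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    """Sample n candidates, score each with the reward model, and keep the
    highest-scoring one. More samples cost more inference, but usually raise
    the reward of the output you keep (or fine-tune on, LLaMA2-style)."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model(prompt, c))

print(best_of_n("Explain the KL constraint in one sentence.", n=8))
```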
Note: Nathan also followed up this post with RLHF resources from his and peers' work:

Show Notes
* Full RLHF Slides
* Interconnects
* Retort (podcast)
* von Neumann-Morgenstern utility theorem
* Bradley-Terry model (pairwise preferences model)
* Constitutional AI
* Tamer (2008 paper by Bradley Knox and Peter Stone)
* Paul Christiano et al. RLHF paper
* InstructGPT
* Eureka by Jim Fan
* ByteDance / OpenAI lawsuit
* AlpacaEval
* MTBench
* TruthfulQA (evaluation tool)
* Self-Instruct Paper
* Open Assistant
* Louis Castricato
* Nazneen Rajani
* Tulu (DPO model from the Allen Institute)

Timestamps
* [00:00:00] Introductions and background on the lecture origins
* [00:05:17] History of RL and its applications
* [00:10:09] Intellectual history of RLHF
* [00:13:47] RLHF for decision-making and pre-deep RL vs deep RL
* [00:20:19] Initial papers and intuitions around RLHF
* [00:27:57] The three phases of RLHF
* [00:31:09] Overfitting issues
* [00:34:47] How preferences get defined
* [00:40:35] Ballpark on LLaMA2 costs
* [00:42:50] Synthetic data for training
* [00:47:25] Technical deep dive in the RLHF process
* [00:54:34] Rejection sampling / best of N sampling
* [00:57:49] Constitutional AI
* [01:04:13] DPO
* [01:08:54] What's the Allen Institute for AI?
* [01:13:43] Benchmarks and models comparisons

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we have Dr. Nathan Lambert in the house. Welcome.

Nathan [00:00:18]: Thanks guys.

Swyx [00:00:19]: You didn't have to come too far. You got your PhD in Berkeley, and it seems like you've lived there most of the time in recent years. You worked on robotics and model-based reinforcement learning on your PhD, and you also interned at FAIR and DeepMind. You bootstrapped the RLHF team at Hugging Face, and you recently joined the Allen Institute as a research scientist. So that's your quick bio. What should people know about you that maybe is not super obvious about you on New LinkedIn?

Nathan [00:00:43]: I stay sane in various insane sport and ultra-endurance sport activities that I do.

Swyx [00:00:50]: What's an ultra-endurance sport activity?

Nathan [00:00:52]: Long-distance trail running or gravel biking. Try to unplug sometimes, although it's harder these days. Yeah.

Swyx [00:00:59]: Well, you know, just the Bay Area is just really good for that stuff, right?

Nathan [00:01:02]: Oh, yeah. You can't beat it. I have a trailhead like 1.2 miles from my house, which is pretty unmatchable in any other urban area.

Swyx [00:01:11]: Pretty excellent. You also have an incredible blog, Interconnects, which I'm a fan of. And I also just recently discovered that you have a new podcast, Retort.

Nathan [00:01:20]: Yeah, we do. I've been writing for a while, and I feel like I've finally started to write things that are understandable and fun. After a few years lost in the wilderness, if you ask some of my friends that I made read the earlier blogs, they're like, oh, this is yikes, but it's coming along. And the podcast is with my friend Tom, and we just kind of like riff on what's actually happening on AI and not really do news recaps, but just what it all means and have a more critical perspective on the things that really are kind of funny, but still very serious happening in the world of machine learning.

Swyx [00:01:52]: Yeah. Awesome. So let's talk about your work. What would you highlight as your greatest hits so far on Interconnects, at least?

Nathan [00:01:59]: So the ones that are most popular are timely and or opinion pieces. So the first real breakout piece was in April, when I also just wrote down the thing that everyone in AI was feeling, which is we're all feeling stressed, that we're going to get scooped, and that we're overworked, which is behind the curtain, what it feels like to work in AI. And then a similar one, which we might touch on later in this, was about my recent job search, which wasn't the first time I wrote a job search post. People always love that stuff. It's so open. I mean, it's easy for me to do in a way that it's very on-brand, and it's very helpful. I understand that until you've done it, it's hard to share this information. And then the other popular ones are various model training techniques or fine tuning. There's an early one on RLHF, which is, this stuff is all just like when I figure it out in my brain. So I wrote an article that's like how RLHF actually works, which is just the intuitions that I had put together in the summer about RLHF, and that did pretty well. And then I opportunistically wrote about QSTAR, which I hate that you have to do it, but it is pretty funny. From a literature perspective, I'm like, OpenAI publishes on work that is very related to mathematical reasoning. So it's like, oh, you just poke a little around what they've already published, and it seems pretty reasonable. But we don't know. They probably just got like a moderate bump on one of their benchmarks, and then everyone lost their minds. It doesn't really matter.

Swyx [00:03:15]: You're like, this is why Sam Altman was fired. I don't know. Anyway, we're here to talk about RLHF 101. You did a presentation, and I think you expressed some desire to rerecord it. And that's why I reached out on Twitter saying, like, why not rerecord it with us, and then we can ask questions and talk about it. Yeah, sounds good.

Nathan [00:03:30]: I try to do it every six or 12 months is my estimated cadence, just to refine the ways that I say things. And people will see that we don't know that much more, but we have a bit of a better way of saying what we don't know.

Swyx [00:03:43]: Awesome. We can dive right in. I don't know if there's any other topics that we want to lay out as groundwork.

Alessio [00:03:48]: No, you have some awesome slides. So for people listening on podcast only, we're going to have the slides on our show notes, and then we're going to have a YouTube version where we run through everything together.

Nathan [00:03:59]: Sounds good. Yeah. I think to start skipping a lot of the, like, what is a language model stuff, everyone knows that at this point. I think the quote from the Llama 2 paper is a great kind of tidbit on RLHF becoming like a real deal. There was some uncertainty earlier in the year about whether or not RLHF was really going to be important. I think it was not that surprising that it is. I mean, with recent models still using it, the signs were there, but the Llama 2 paper essentially reads like a bunch of NLP researchers that were skeptical and surprised.
So the quote from the paper was, meanwhile, reinforcement learning, known for its instability, seemed a somewhat shadowy field for those in the NLP research community. However, reinforcement learning proved highly effective, particularly given its cost and time effectiveness. So you don't really know exactly what the costs and time that Meta is looking at, because they have a huge team and a pretty good amount of money here to release these Llama models. This is just the kind of thing that we're seeing now. I think any major company that wasn't doing RLHF is now realizing they have to have a team around this. At the same time, we don't have a lot of that in the open and research communities at the same scale. I think seeing that converge would be great, but it's still very early days. And the other thing on the slide is some of Anthropic's work, but everyone knows Anthropic is kind of the masters of this, and they have some of their own techniques that we're going to talk about later on, but that's kind of where we start.

Alessio [00:05:17]: Can we do just a one-second RL version? So you come from a robotics background, which RL used to be, or maybe still is, state-of-the-art. And then now you're seeing a lot of LLM plus RL, so you have Jim Fan's Eureka, you have MPU, which we had on the podcast when they started with RL. Now they're doing RL plus LLMs. Yeah. Any thoughts there on how we got here? Maybe how the pendulum will keep swinging?

Nathan [00:05:46]: I really think RL is about a framing of viewing the world through trial and error learning and feedback, and really just one that's focused on thinking about decision-making and inputs in the world and how inputs have reactions. And in that, a lot of people come from a lot of different backgrounds, whether it's physics, electrical engineering, mechanical engineering. There are obviously computer scientists, but compared to other fields of CS, I do think it's a much more diverse background of people. My background was in electrical engineering and doing robotics and things like that. It really just changes the worldview. I think that reinforcement learning as it was back then, so to say, is really different. You're looking at these toy problems and the numbers are totally different, and everyone went kind of zero to one at scaling these things up, but people like Jim Fan and other people that were... You saw this transition in the decision transformer and papers and when people are trying to use transformers to do decision-making for things like offline RL, and I think that was kind of like the early days. But then once language models were so proven, it's like everyone is using this tool for their research. I think in the long run, it will still settle out, or RL will still be a field that people work on just because of these kind of fundamental things that I talked about. It's just viewing the whole problem formulation different than predicting text, and so there needs to be that separation. And the view of RL in language models is pretty contrived already, so it's not like we're doing real RL. I think the last slide that I have here is a way to make RLHF more like what people would think of with RL, so actually running things over time, but a weird lineage of tools that happen to get us to where we are, so that's why the name takes up so much space, but it could have gone a lot of different ways. Cool.

Alessio [00:07:29]: We made it one slide before going on a tangent.

Nathan [00:07:31]: Yeah, I mean, it's kind of related.
This is a...

Swyx [00:07:35]: Yeah, so we have a history of RL.

Nathan [00:07:37]: Yeah, so to give the context, this paper really started because I have this more diverse background than some computer scientists, such as trying to understand what the difference of a cost function or a reward function and a preference function would be without going into all of the details. Costs are normally things that control theorists would work with in these kind of closed domains, and then reinforcement learning has always worked with rewards that's central to the formulation that we'll see, and then the idea was like, okay, we now are at preferences, and each step along the way there's kind of different assumptions that you're making. We'll get into these, and those assumptions are built on other fields of work. So that's what this slide is going to say, it's like RLHF, while directly building on tools from RL and language models, is really implicitly impacted and built on theories and philosophies spanning tons of human history. I think we cite Aristotle in this paper, which is fun. It's like going pre-BC, it's like 2,300 years old or something like that. So that's the reason to do this, I think. We kind of list some things in the paper about summarizing what different presumptions of RLHF could be. I think going through these is actually kind of funny. It's fun to talk about these, because they're kind of grab bags of things that you'll see return throughout this podcast that we're talking about it. The core thing of RLHF that, in order to be a believer in this, is that RL actually works. It's like, if you have a reward function, you can optimize it in some way and get a different performance out of it, and you could do this at scale, and you could do this in really complex environments, which is, I don't know how to do that in all the domains. I don't know how to exactly make chat GPT. So it's kind of, we'll overshadow everything. And then there's, go from something kind of obvious like that, and then you read the von Neumann-Morgenstern utility theorem, which is essentially an economic theory that says you can weight different probabilities of different people, which is a theoretical piece of work that is the foundation of utilitarianism, and trying to quantify preferences is crucial to doing any sort of RLHF. And if you look into this, all of these things, there's way more you could go into if you're interested in any of these. So this is kind of like grabbing a few random things, and then kind of similar to that is the Bradley-Terry model, which is the fancy name for the pairwise preferences that everyone is doing. And then all the things that are like, that Anthropic and OpenAI figured out that you can do, which is that you can aggregate preferences from a bunch of different people and different sources. And then when you actually do RLHF, you extract things from that data, and then you train a model that works somehow. And we don't know, there's a lot of complex links there, but if you want to be a believer in doing this at scale, these are the sorts of things that you have to accept as preconditions for doing RLHF. Yeah.

Swyx [00:10:09]: You have a nice chart of like the sort of intellectual history of RLHF that we'll send people to refer to either in your paper or in the YouTube video for this podcast. But I like the other slide that you have on like the presumptions that you need to have for RLHF to work. You already mentioned some of those. Which one's underappreciated?
Like, this is the first time I've come across the VNM Utility Theorem.

Nathan [00:10:29]: Yeah, I know. This is what you get from working with people like my co-host on the podcast, The Retort, who is a sociologist by training. So he knows all these things and like who the philosophers are that found these different things like utilitarianism. But there's a lot that goes into this. Like essentially there's even economic theories that like there's debate whether or not preferences exist at all. And there's like different types of math you can use with whether or not you actually can model preferences at all. So it's pretty obvious that RLHF is built on the math that thinks that you can actually model any human preference. But this is the sort of thing that's been debated for a long time. So all the work that's here is like, and people hear about in their AI classes. So like Jeremy Bentham, like hedonic calculus and all these things like these are the side of work where people assume that preferences can be measured. And this is like, I don't really know, like, this is what I kind of go on a rant and I say that in RLHF calling things a preference model is a little annoying because there's no inductive bias of what a preference is. It's like if you were to learn a robotic system and you learned a dynamics model, like hopefully that actually mirrors the world in some way of the dynamics. But with a preference model, it's like, Oh my God, I don't know what this model, like I don't know what chat GPT encodes as any sort of preference or what I would want it to be in a fair way. Anthropic has done more work on trying to write these things down. But even like if you look at Claude's constitution, like that doesn't mean the model believes these things. It's just trained to prioritize these things. And that's kind of what the later points I'm looking at, like what RLHF is doing and if it's actually like a repeatable process in the data and in the training, that's just unknown. And we have a long way to go before we understand what this is and the link between preference data and any notion of like writing down a specific value.

Alessio [00:12:05]: The disconnect between more sociology work versus computer work already exists, or is it like a recent cross contamination? Because when we had Tri Dao on the podcast, he said FlashAttention came to be because at Hazy they have so much overlap between systems engineers and like deep learning engineers. Is it the same in this field?

Nathan [00:12:26]: So I've gone to a couple of workshops for the populations of people who you'd want to include this like R. I think the reason why it's not really talked about is just because the RLHF techniques that people use were built in labs like OpenAI and DeepMind where there are some of these people. These places do a pretty good job of trying to get these people in the door when you compare them to like normal startups. But like they're not bringing in academics from economics, like social choice theory. There's just too much. Like the criticism of this paper that this is based on is like, oh, you're missing these things in RL or at least this decade of RL and it's like it would be literally be bigger than the Sutton and Barto book if you were to include everyone. So it's really hard to include everyone in a principled manner when you're designing this. It's just a good way to understand and improve the communication of what RLHF is and like what is a good reward model for society.
It really probably comes down to what an individual wants and it'll probably motivate models to move more in that direction and just be a little bit better about the communication, which is a recurring theme and kind of my work is like I just get frustrated when people say things that don't really make sense, especially when it's going to manipulate individuals' values or manipulate the general view of AI or anything like this. So that's kind of why RLHF is so interesting. It's very vague in what it's actually doing while the problem specification is very general.

Swyx [00:13:42]: Shall we go to the, I guess, the diagram here on the reinforcement learning basics? Yeah.

Nathan [00:13:47]: So reinforcement learning, I kind of mentioned this, it's a trial and error type of system. The diagram in the slides is really this classic thing where you have an agent interacting with an environment. So it's kind of this agent has some input to the environment, which is called the action. The environment returns a state and a reward and that repeats over time and the agent learns based on these states and these rewards that it's seeing and it should learn a policy that makes the rewards go up. That seems pretty simple until you try to mentally map what this looks like in language, which is that like the language models don't make this easy. I think with the language model, it's very hard to define what an environment is. So if the language model is the policy and it's generating, it's like the environment should be a human, but setting up the infrastructure to take tens of thousands of prompts and generate them and then show them to a human and collect the human responses and then shove that into your training architecture is very far away from working. So we don't really have an environment. We just have a reward model that returns a reward and the state doesn't really exist when you look at it like an RL problem. What happens is the state is a prompt and then you do a completion and then you throw it away and you grab a new prompt. Really, as an RL researcher, you would think of this as being like you take a state, you get some completion from it and then you look at what that is and you keep kind of iterating on it, and all of that isn't here, which is why you'll hear RLHF referred to as a bandits problem, which is kind of like you choose one action and then you watch the dynamics play out. There's many more debates that you can have in this. If you get the right RL people in the room, then kind of like this is an RL even when you zoom into what RLHF is doing.

Alessio [00:15:22]: Does this change as you think about a chain of thought reasoning and things like that? Like does the state become part of the chain that you're going through?

Nathan [00:15:29]: There's work that I've mentioned on one slide called process reward models that essentially rewards each step in the chain of thought reasoning. It doesn't really give the part of interaction, but it does make it a little bit more fine grained where you can think about like calling it at least you have many states from your initial state. That formulation I don't think people have fully settled on. I think there's a bunch of great work out there, like even OpenAI is releasing a lot of this and Let's Verify Step by Step is their pretty great paper on the matter. I think in the next year that'll probably get made more concrete by the community on like if you can easily draw out like if chain of thought reasoning is more like RL, we can talk about that more later. That's a kind of a more advanced topic than we probably should spend all the time on.
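To make the bandit framing from this exchange concrete, here is a toy, self-contained Python sketch. Everything in it is invented for illustration: the "policy" is just a softmax over three canned completions and the "reward model" is a lookup table, whereas real RLHF runs PPO-style updates with a KL penalty on a full language model. The point is only that each prompt-completion-reward step is a single-shot decision, which is why Nathan calls it a bandits problem.

```python
import math
import random

# Toy "policy": a softmax distribution over a handful of canned completions.
completions = ["refuse politely", "answer helpfully", "ramble off topic"]
logits = [0.0, 0.0, 0.0]

def reward_model(completion: str) -> float:
    # Stand-in for a learned reward model's scalar score.
    return {"refuse politely": 0.2, "answer helpfully": 1.0, "ramble off topic": -0.5}[completion]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

learning_rate = 0.5
for step in range(200):
    probs = softmax(logits)
    # One "episode": sample a completion for the prompt, get a reward, done.
    # No further states follow, which is the bandit structure described above.
    i = random.choices(range(len(completions)), weights=probs)[0]
    reward = reward_model(completions[i])
    baseline = sum(p * reward_model(c) for p, c in zip(probs, completions))
    advantage = reward - baseline
    # REINFORCE-style update: make the sampled completion more likely
    # when it beat the baseline, less likely when it did worse.
    for j in range(len(logits)):
        grad_logprob = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += learning_rate * advantage * grad_logprob

print({c: round(p, 2) for c, p in zip(completions, softmax(logits))})
# Most of the probability mass should end up on "answer helpfully".
```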
Swyx [00:16:13]: RLHF for decision making. You have a slide here that compares pre-deep RL versus deep RL.

Nathan [00:16:19]: This is getting into the history of things, which is showing that the work that people are using now really came from well outside of NLP and it came before deep learning was big. Next up from this paper, Tamer, which is from 2008. Some names that are still really relevant in kind of human centric RL, Bradley Knox and Peter Stone. If you have an agent take an action, you would just have a human give a score from zero to one as a reward rather than having a reward function. And then with that classifier, you can do something with a policy that learns to take actions to maximize that reward. It's a pretty simple setup. It works in simple domains. And then the reason why this is interesting is you compare it to the paper that everyone knows, which is this Paul Christiano et al. Deep Reinforcement Learning from Human Preferences paper, which is where they showed that learning from human preferences, you can solve like the basic RL tasks at the time. So various control problems and simulation and this kind of like human preferences approach had higher rewards in some environments than if you just threw RL at the environment that returned a reward. So the preferences thing was you took two trajectories. So in this case, it was like complete trajectories of the agent and the human was labeling which one is better. You can see how this kind of comes to be like the pairwise preferences that are used today that we'll talk about. And there's also a really kind of interesting nugget that is the trajectory that the humans were labeling over has a lot more information than the RL algorithm would see if you just had one state, which is kind of why people think that it's why the performance in this paper was so strong. But I still think that it's surprising that there isn't more RL work of this style happening now. This paper is in 2017. So it's like six years later and I haven't seen things that are exactly similar, but it's a great paper to understand where stuff that's happening now kind of came from.

Swyx [00:17:58]: Just on the Christiano paper, you mentioned the performance being strong. I don't remember what results should I have in mind when I think about that paper?

Nathan [00:18:04]: It's mostly like if you think about an RL learning curve, which is like on the X axis you have environment interactions, and on the Y axis you have performance. You can think about different like ablation studies of between algorithms. So I think they use like A2C, which I don't even remember what that stands for, as their baseline. But if you do the human preference version on a bunch of environments, like the human preference labels, the agent was able to learn faster than if it just learned from the signal from the environment, which means like it's happening because the reward model has more information than the agent would. But like the fact that it can do better, I was like, that's pretty surprising to me because RL algorithms are pretty sensitive. So I was like, okay.

Swyx [00:18:41]: It's just one thing I do want to establish as a baseline for our listeners. We are updating all the weights.
In some sense, the next token prediction task of training a language model is a form of reinforcement learning. Except that it's not from human feedback. It's just self-supervised learning from a general corpus. There's one distinction which I love, which is that you can actually give negative feedback. Whereas in a general sort of pre-training situation, you cannot. And maybe like the order of magnitude of feedback, like the Likert scale that you're going to talk about, that actually just gives more signal than a typical training process would do in a language model setting. Yeah.

Nathan [00:19:15]: I don't think I'm the right person to comment exactly, but like you can make analogies that reinforcement learning is self-supervised learning as well. Like there are a lot of things that will point to that. I don't know whether or not it's a richer signal. I think that could be seen in the results. It's a good thing for people to look into more. As reinforcement learning is so much less compute, like it is a richer signal in terms of its impact. Because if they could do what RLHF is doing at pre-training, they would, but they don't know how to have that effect in like a stable manner. Otherwise everyone would do it.

Swyx [00:19:45]: On a practical basis, as someone fine-tuning models, I have often wished for negative fine-tuning, which pretty much doesn't exist in OpenAI land. And it's not the default setup in open-source land.

Nathan [00:19:57]: How does this work in like diffusion models and stuff? Because you can give negative prompts to something to like stable diffusion or whatever. It's for guidance.

Swyx [00:20:04]: That's for CLIP guidance.

Nathan [00:20:05]: Is that just from like how they prompt it then? I'm just wondering if we could do something similar. It's another tangent.

Swyx [00:20:10]: I do want to sort of spell that out for people in case they haven't made the connection between RLHF and the rest of the training process. They might have some familiarity with it.

Nathan [00:20:19]: Yeah. The upcoming slides can really dig into this, which is like, in this 2018 paper, there was a position paper from a bunch of the same authors from the Christiano paper and from the OpenAI work that everyone knows, which is like, they write a position paper on what a preference reward model could do to solve alignment for agents. That's kind of based on two assumptions. The first assumption is that we can learn user intentions to a sufficiently high accuracy. That doesn't land with me because I don't know what that means. But the second one is pretty telling in the context of RLHF, which is for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. And this is the whole thing. It's like we can compare two poems that the model generates and it can be viewed as liking a positive example, or it could be viewed as really disliking a negative example. And that's what I think a lot of people are doing in like the harm space is like a harmful response to a language model, whether or not you agree with the company's definition of harms, is that it's a really bad negative example and they downweight them by preferring something more benign in the RLHF process, among other ways of dealing with safety. So that's a good way of saying it's like this is core, this kind of like comparison and positive or negative example is core to all of the RLHF work that has continued.

Swyx [00:21:29]: People often say, I don't know what I want, but I'll know when I see it.
This is that expressed in reinforcement learning tools.Nathan [00:21:35]: Yeah, it is. Yeah, it is. That's what everyone's doing in the preference modeling stage that we'll get to. Yeah. Yeah. And you can see there are more papers. This is really just to have all the links for people that go deeper. There's a Ziegler et al. paper in 2019, which shows that you can do this RLHF process on language models. This familiar diagram starts to emerge in 2019, and it's just to show that this goes really far back. I think we can kind of breeze through some of these. And then 2020 is the first OpenAI experiment that I think caught people's eyes, which is this learning to summarize experiment. It has this three-step process that we'll go into more when I kind of go into the main concepts. But this is like the first time you see this diagram that they reuse with InstructGPT, they reuse with ChatGPT. And the types of examples that they would have, I don't think I need to read these exactly, but one that I have read a whole bunch of times is like, they took these prompts from Reddit that was like, explain like I'm five or get career advice, and people really pour their heart and soul into these. So these are like multi-paragraph pieces of writing. And then they essentially do comparisons between a vanilla language model, like I think it was either GPT-2 or GPT-3, I don't always get the exact years.Swyx [00:22:42]: 3 was early 2020. So that's about right.Nathan [00:22:45]: Yeah. So this is probably done with GPT-2. It doesn't really matter. But the language model does normal things when you do few shot, which is like it repeats itself. It doesn't have nice text. And what they did is that this was the first time where the language model would generate like pretty nice text as an output. It was restricted to the summarization domain. But I think that I guess this is where I wish I was paying attention more because I would see the paper, but I didn't know to read the language model outputs and kind of understand this qualitative sense of the models very well then. Because you look at the plots in the papers, these Learning to Summarize and InstructGPT papers have incredibly pretty plots, just like nicely separated lines with error bars and they're like supervised fine-tuning works, the RL step works. But if you were early to see like how different the language that was written by these models was, I think you could have been early to like things like ChatGPT and knowing RLHF would matter. And now I think the good people know to chat with language models, but not even everyone does this. Like people are still looking at numbers. And I think OpenAI probably figured it out when they were doing this, how important that could be. And then they had years to kind of chisel away at that and that's why they're doing so well now. Yeah.Swyx [00:23:56]: I mean, arguably, you know, it's well known that ChatGPT was kind of an accident that they didn't think it would be that big of a deal. Yeah.Nathan [00:24:02]: So maybe they didn't. Maybe they didn't, but they were getting the proxy that they needed.Swyx [00:24:06]: I've heard off the record from other labs that it was in the air. If OpenAI didn't do it, someone else would have done it. So you've mentioned a couple of other papers that are very seminal to this period.
And I love how you say way back when in referring to 2019.Nathan [00:24:19]: It feels like it in my life.Swyx [00:24:21]: So how much should people understand the relationship between RLHF, instruction tuning, PPO, KL divergence, anything like that? Like how would you construct the level of knowledge that people should dive into? What should people know at the high level? And then if people want to dive in deeper, where do they go? Is instruction tuning important here or is that part of the overall process towards modern RLHF?Nathan [00:24:44]: I think for most people, instruction tuning is probably still more important in their day to day life. I think instruction tuning works very well. You can write samples by hand that make sense. You can get the model to learn from them. You could do this with very low compute. It's easy to do almost in like no code solutions at this point. And the loss function is really straightforward. And then if you're interested in RLHF, you can kind of learn from it from a different perspective, which is like how the instruction tuning distribution makes it easier for your RLHF model to learn. There's a lot of details depending on your preference data, if it's close to your instruction model or not, if that matters. But that's really at the RLHF stage. So I think it's nice to segment and just kind of understand what your level of investment and goals are. I think instruction tuning still can do most of what you want to do. And it's like, if you want to think about RLHF, at least before DPO really had taken off at all, it would be like, do you want to have a team of at least like five people if you're really thinking about doing RLHF? I think DPO makes it a little bit easier, but that's still really limited to kind of one data set that everyone's using at this point. Like everyone's using this UltraFeedback data set and it boosts AlpacaEval, MT-Bench, TruthfulQA and like the qualitative model a bit. We don't really know why. It's like, it might just be a data set combined with the method, but you've got to be ready for a bumpy ride if you're wanting to try to do RLHF. I don't really recommend most startups to do it unless it's like going to provide them a clear competitive advantage in their kind of niche, because you're not going to make your model ChatGPT-like, better than OpenAI or anything like that. You've got to accept that there's some exploration there and you might get a vein of benefit in your specific domain, but I'm still like, oh, be careful going into the RLHF can of worms. You probably don't need to.Swyx [00:26:27]: Okay. So there's a bit of a time skip in what you mentioned. DPO is like a couple months old, so we'll leave that towards the end. I think the main result that I think most people talk about at this stage, we're talking about September 2020 and then going into, I guess maybe last year was Vicuña as one of the more interesting applications of instruction tuning that pushed Llama 1 from, let's say, a GPT-3-ish model to a GPT-3.5 model in pure open source with not a lot of resources. I think, I mean, they said something like, you know, they used like under $100 to make this.Nathan [00:26:58]: Yeah. Like instruction tuning can really go a long way. I think the claims of ChatGPT level are long overblown in most of the things in open source. That's not to say anything against Vicuña; it was a huge step and it's just kind of showing that instruction tuning with the right data will completely change what it feels like to talk with your model.
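Since the instruction-tuning loss keeps getting described as straightforward, here is roughly what it looks like: plain next-token cross-entropy over a formatted prompt-plus-response example, with the prompt tokens masked out so only the response is graded. The token ids and logits below are stand-ins for a real tokenizer and language model.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # positions with this label are skipped by cross_entropy

def sft_labels(prompt_ids, response_ids):
    """Build input_ids and labels for one instruction-tuning example.
    The loss is only applied on the response tokens, not the prompt."""
    input_ids = torch.cat([prompt_ids, response_ids])
    labels = torch.cat([torch.full_like(prompt_ids, IGNORE_INDEX), response_ids])
    return input_ids, labels

def sft_loss(logits, labels):
    # Standard causal-LM shift: position t predicts token t+1.
    return F.cross_entropy(logits[:-1], labels[1:], ignore_index=IGNORE_INDEX)

# Illustrative example with fake token ids and fake logits.
prompt_ids = torch.tensor([1, 5, 9, 2])   # "Question: ... Answer:"
response_ids = torch.tensor([7, 7, 3])    # the reference answer
input_ids, labels = sft_labels(prompt_ids, response_ids)
logits = torch.randn(input_ids.shape[0], 32000, requires_grad=True)
loss = sft_loss(logits, labels)
loss.backward()
```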
Yeah.Swyx [00:27:19]: From text completion to actually chatting back and forth. Yeah. Yeah.Nathan [00:27:23]: Instruction tuning can be multi-turn. Just having a little bit of data that's like a couple of turns can go a really long way. That was like the story of the whole first part of the year is like people would be surprised by how far you can take instruction tuning on a small model. I think the things that people see now is like the small models don't really handle nuance as well and they could be more repetitive even if they have really good instruction tuning. But if you take that kind of 7 to 70 billion parameter jump, like the instruction tuning at the bigger model is like robustness, little things make more sense. So that's still just with instruction tuning and scale more than anything else.Swyx [00:27:56]: Excellent. Shall we go to technical overview?Nathan [00:27:58]: Yeah. This is kind of where we go through my own version of this like three phase process. You can talk about instruction tuning, which we've talked about a lot. It's funny because all these things, instruction tuning has the fewest slides, even though it's the most practical thing for most people. We could save the debate for like if the big labs still do instruction tuning for later, but that's a coming wave for people. And then like preference data and training and then kind of like what does reinforcement learning optimization actually mean? We talk about these sequentially because you really have to be able to do each of them to be able to do the next one. You need to be able to have a model that's chatty or helpful instruction following. Every company has their own word that they like to assign to what instructions mean. And then once you have that, you can collect preference data and do some sort of optimization.Swyx [00:28:39]: When you say word, you mean like angle bracket inst or do you mean something else?Nathan [00:28:42]: Oh, I don't even know what inst means, but just saying like they use their adjective that they like. I think Anthropic also, like steerable is another one.Swyx [00:28:51]: Just the way they describe it. Yeah.Nathan [00:28:53]: So like instruction tuning, we've covered most of this is really about like you should try to adapt your models to specific needs. It makes models that were only okay, extremely comprehensible. A lot of the times it's where you start to get things like chat templates. So if you want to do system prompts, if you want to ask your model, like act like a pirate, that's one of the ones I always do, which is always funny, but like whatever you like act like a chef, like anything, this is where those types of things that people really know in language models start to get applied. So it's good as a kind of starting point because this chat template is used in all of these things down the line, but it was a basic pointer. It's like, once you see this with instruction tuning, you really know it, which is like you take things like Stack Overflow where you have a question and an answer. You format that data really nicely. There's much more tricky things that people do, but I still think the vast majority of it is question answer. Please explain this topic to me, generate this thing for me. That hasn't changed that much this year. I think people have just gotten better at scaling up the data that they need. Yeah, this is where this talk will kind of take a whole left turn into more technical detail land.
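To make the chat-template idea concrete, here is a toy formatter that turns role-tagged messages into the single string the model actually sees. The tag strings are invented for illustration; each model family defines its own, and libraries such as Hugging Face transformers ship the template with the tokenizer.

```python
# Hypothetical template, loosely in the spirit of [INST]-style formats.
def apply_toy_chat_template(messages):
    """messages: list of {"role": "system"|"user"|"assistant", "content": str}"""
    parts = []
    for m in messages:
        if m["role"] == "system":
            parts.append(f"<<SYS>>{m['content']}<</SYS>>")
        elif m["role"] == "user":
            parts.append(f"[INST] {m['content']} [/INST]")
        else:  # assistant turns are appended as plain text
            parts.append(m["content"])
    return "\n".join(parts)

prompt = apply_toy_chat_template([
    {"role": "system", "content": "Act like a pirate."},
    {"role": "user", "content": "Explain KL divergence."},
])
print(prompt)
```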
I put a slide with the RLHF objective, which I think is good for people to know. I've started going back to this more, just kind of understand what is trying to happen here and what type of math people could do. I think because of this algorithm, we've mentioned this, it's in the air, direct preference optimization, but everything kind of comes from an equation of trying to learn a policy that maximizes the reward. The reward is some learned metric. A lot can be said about what the reward should be, subject to some constraint. The most popular constraint is the KL constraint, which is just a distributional distance. Essentially in language models, that means if you have a completion from your instruction or RLHF model, you can compare that completion to a base model. And looking at the log probs from the model, which are essentially how likely each token is, you can see a rough calculation of the distance between these two models, just as a scalar number. I think what that actually looks like in code, you can look at it. It'd be like a sum of log probs that you get right from the model. It'll look much simpler than it sounds, but it is just to make the optimization kind of stay on track. Make sure it doesn't overfit to the RLHF data. Because we have so little data in RLHF, overfitting is really something that could happen. I think it'll fit to specific features that labelers like to see, that the model likes to generate, punctuation, weird tokens like calculator tokens. It could overfit to anything if it's in the data a lot and it happens to be in a specific format. And the KL constraint prevents that. There's not that much documented work on that, but there's a lot of people that know if you take that away, it just doesn't work at all. I think it's something that people don't focus on too much. But the objective, as I said, it's just kind of, you optimize the reward. The reward is where the human part of this comes in. We'll talk about that next. And then subject to a constraint, don't change the model too much. The real questions are, how do you implement the reward? And then how do you make the reward go up in a meaningful way? So like a preference model, the task is kind of to design a human reward. I think the equation that most of the stuff is based on right now is something called a Bradley-Terry model, which is like a pairwise preference model where you compare two completions and you say which one you like better. I'll show an interface that Anthropic uses here. And the Bradley-Terry model is really a fancy probability between two selections. And what's happening in the math is that you're looking at the probability that the chosen completion, the one you like better, is actually the better completion over the rejected completion. And what these preference models do is they assume this probability is correlated to reward. So if you just sample from this probability, it'll give you a scalar. And then you use that reward later on to signify what piece of text is better. I'm kind of inclined to breeze through the math stuff because otherwise, it's going to be not as good to listen to.Alessio [00:32:49]: I think people want to hear it. I think there's a lot of higher level explanations out there. Yeah.Nathan [00:32:55]: So the real thing is you need to assign a scalar reward of how good a response is. And that's not necessarily that easy to understand. Because if we go back to one of the first works, I mentioned this TAMER thing for decision making.
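As a rough sketch of the two pieces just described (assumed shapes and names, not a particular library's API): the Bradley-Terry loss trains the reward model so the chosen completion scores higher than the rejected one, and at optimization time the reward the policy sees is the reward-model score minus a beta-weighted sum of per-token log-prob differences against the frozen reference model, the KL-style penalty that keeps the policy from drifting.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(reward_chosen, reward_rejected):
    """Reward-model training loss: P(chosen > rejected) = sigmoid(r_c - r_r)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

def shaped_reward(reward_score, logp_policy, logp_ref, beta=0.1):
    """Reward actually handed to the RL optimizer for one completion.

    reward_score: scalar from the trained reward model
    logp_policy:  [T] per-token log probs of the completion under the policy
    logp_ref:     [T] per-token log probs under the frozen reference model
    """
    kl_penalty = (logp_policy - logp_ref).sum()  # rough per-sequence KL estimate
    return reward_score - beta * kl_penalty

# Illustrative numbers only.
loss_rm = bradley_terry_loss(torch.tensor([1.3]), torch.tensor([0.2]))
r = shaped_reward(torch.tensor(0.8), torch.randn(12), torch.randn(12))
```

If the policy's log probs drift far from the reference model's, the penalty grows and pulls the optimization back toward the base model, which is the "don't change the model too much" part of the objective.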
People tried that with language models, which is if you have a prompt and a completion and you just have someone rate it from 0 to 10, could you then train a reward model on all of these completions and 0 to 10 ratings and see if you can get ChatGPT with that? And the answer is really kind of no. Like a lot of people tried that. It didn't really work. And then that's why they tried this pairwise preference thing. And it happened to work. And this Bradley-Terry model comes from the 50s. It's from these fields that I was mentioning earlier. And it's wild how much this happens. I mean, this screenshot I have in the slides is from the DPO paper. I think it might be the appendix. But it's still really around in the literature of what people are doing for RLHF.Alessio [00:33:45]: Yeah.Nathan [00:33:45]: So it's a fun one to know.Swyx [00:33:46]: I'll point out one presumption that this heavily relies on. You mentioned this as part of your six presumptions that we covered earlier, which is that you can aggregate these preferences. This is not exactly true among all humans, right? I have a preference for one thing. You have a preference for a different thing. And actually coming from economics, you mentioned economics earlier. There's a theorem or a name for this called Arrow's impossibility theorem, which I'm sure you've come across.Nathan [00:34:07]: It's one of the many kind of things we throw around in the paper.Swyx [00:34:10]: Right. Do we just ignore it?Nathan [00:34:14]: We just, yeah, just aggregate. Yeah. I think the reason this really is done on a deep level is that you're not actually trying to model any contestable preference in this. You're not trying to go into things that are controversial or anything. It's really the notion of preference is trying to stay around correctness and style rather than any meaningful notion of preference. Because otherwise these companies, they don't want to do this at all. I think that's just how it is. And it's like, if you look at what people actually do. So I have a bunch of slides on the feedback interface. And they all publish this.Swyx [00:34:43]: It's always at the appendices of every paper.Nathan [00:34:47]: There's something later on in this talk, which is like, but it's good to mention. And this is when you're doing this preference collection, you write out a very long document of instructions to people that are collecting this data. And it's like, this is the hierarchy of what we want to prioritize. Something like factuality, helpfulness, honesty, harmlessness. These are all different things. Every company will rank these in different ways, provide extensive examples. It's like, if you see these two answers, you should select this one and why. And all of this stuff. And then my kind of like head scratching is like, why don't we check if the models actually do these things that we tell the data annotators to collect? But I think it's because it's hard to make that attribution. And it's hard to test if a model is honest and stuff. It would just be nice to understand the kind of causal mechanisms as a researcher or like if our goals are met. But at a simple level, what it boils down to, I have a lot more images than I need. It's like you're having a conversation with an AI, something like ChatGPT. You get shown two responses or more in some papers, and then you have to choose which one is better. I think something you'll hear a lot in this space is something called a Likert scale. Likert is a name.
It's a name for probably some research in economics, decision theory, something. But essentially, it's a type of scale where if you have integers from like one to eight, the middle numbers will represent something close to a tie. And the smallest numbers will represent one model being way better than the other. And the biggest numbers will be like the other model is better. So in the case of one to eight, if you're comparing models A to B, you return a one if you really liked option A, you return an eight if you really liked B, and then like a four or five if they were close. There's other ways to collect this data. This one's become really popular. We played with it a bit at Hugging Face. It's hard to use. Filling out this preference data is really hard. You have to read like multiple paragraphs. It's not for me. Some people really like it, I hear. I'm like, I can't imagine sitting there and reading AI-generated text and like having to do that for my job. But a lot of these early papers in RLHF have good examples of what was done. The one I have here is from Anthropic's collection demo because it was from slides that I did with Anthropic. But you can look up these in the various papers. It looks like ChatGPT with two responses, and then you have an option to say which one is better. It's nothing crazy. The infrastructure is almost exactly the same, but they just log which one you think is better. I think places like Scale are also really big in this where a lot of the labeler companies will help control like who's doing how many samples. You have multiple people go over the same sample and, like, what happens if there's disagreement. I don't really think this disagreement data is used for anything, but it's good to know like what the distribution of prompts is, who's doing it, how many samples you have, controlling the workforce. All of this is very hard. A last thing to add is that a lot of these companies do collect optional metadata. I think the Anthropic example shows a rating of like how good was the prompt or the conversation from good to bad because things matter. Like there's kind of a quadrant of preference data in my mind, which is you're comparing a good answer to a good answer, which is like really interesting signal. And then there's kind of the option of you're comparing a bad answer to a bad answer, which is like you don't want to train your model on two different issues. This is like, we did this at Hugging Face and it was like, our data was like, we don't know if we can use this because a lot of it was just bad answer to bad answer because you're like rushing to try to do this real contract. And then there's also good answer to bad answer, which I think is probably pretty reasonable to include. You just prefer the good one and move on with your life. But those are very different scenarios. I think the OpenAIs of the world are all in good answer, good answer, and have learned to eliminate everything else. But when people try to do this in open source, it's probably like what Open Assistant saw is like, there's just a lot of bad answers in your preference data. And you're like, what do I do with this? Metadata flags can help. I threw in the InstructGPT metadata. You can see how much they collect here. And like everything from the model fails to actually complete the task, hallucinations, different types of offensive or dangerous content, moral judgment, expresses opinion.
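One way to picture what happens to those Likert ratings and quality flags downstream is a small filtering step like the sketch below (field names and thresholds are assumptions, not any lab's actual pipeline): collapse each 1-to-8 rating into a chosen/rejected pair, drop near-ties, and drop the bad-answer-versus-bad-answer comparisons.

```python
def likert_to_pair(record, tie_band=(4, 5)):
    """record: {"a": str, "b": str, "rating": int in 1..8,
                "quality_a": "good"|"ok"|"bad", "quality_b": ...}
    Returns (chosen, rejected) or None if the comparison isn't usable."""
    rating = record["rating"]
    if rating in tie_band:
        return None                     # too close to call, skip
    if record["quality_a"] == "bad" and record["quality_b"] == "bad":
        return None                     # bad-vs-bad teaches the wrong thing
    # Low ratings mean A was preferred, high ratings mean B was preferred.
    if rating < tie_band[0]:
        return record["a"], record["b"]
    return record["b"], record["a"]

pair = likert_to_pair({
    "a": "helpful answer", "b": "rambling answer",
    "rating": 2, "quality_a": "good", "quality_b": "ok",
})
print(pair)  # ('helpful answer', 'rambling answer')
```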
Like, I don't know exactly if they're doing this now, but you can kind of see why doing RLHF at scale and prioritizing a lot of different endpoints would be hard because these are all things I'd be interested in if I was scaling up a big team to do RLHF and like what is going into the preference data. You do an experiment and you're like, okay, we're going to remove all the data where they said the model hallucinates like just that and then retrain everything. Like, what does that do?Swyx [00:38:59]: Yeah, so hallucination is big, but some of these other metadata categories, and I've seen this in a lot of papers, it's like, does it contain sexual content? Does it express a moral judgment? Does it denigrate a protected class? That kind of stuff, very binary. Should people try to adjust for this at the RLHF layer or should they put it as a pipeline where they have a classifier as a separate model that grades the model output?Nathan [00:39:20]: Do you mean for training or like a deployment? Deployment. I do think that people are doing it at deployment. I think we've seen safety and other things in the RLHF pipeline. Like Llama 2 is famous for kind of having these like helpfulness and safety reward models. Deep in the Gemini report is something like Gemini has four things, which is like helpfulness, factuality, maybe safety, maybe something else. But places like Anthropic and ChatGPT and Bard almost surely have a classifier after, which is like, is this text good? Is this text bad? That's not that surprising, I think, because you could use like a hundred times smaller language model and do much better at filtering than RLHF. But I do think it's still so deeply intertwined with the motivation of RLHF to be for safety that some of these categories still persist. I think that's something that will kind of settle out, I think.Swyx [00:40:11]: I'm just wondering if it's worth collecting this data for the RLHF purpose, if you're not going to use it in any way, separate model to-Nathan [00:40:18]: Yeah, I don't think OpenAI will collect all of this anymore, but I think for research perspectives, it's very insightful to know, but it's also expensive. So essentially your preference data scales with how many minutes it takes for you to do each task and every button is like, it scales pretty linearly. So it's not cheap stuff.Swyx [00:40:35]: Can we, since you mentioned expensiveness, I think you may have joined one of our spaces back when Llama 2 was released. We had an estimate from you that was something on the order of Llama 2 cost $3 to $6 million to train GPU-wise, and then it was something like $20 to $30 million in preference data. Is that something that's still in the ballpark? I don't need precise numbers.Nathan [00:40:56]: I think it's still a ballpark. I know that the 20 million was off by a factor of four because I was converting from a prompt number to a total data point. So essentially when you do this, if you have multi-turn setting, each turn will be one data point and the Llama 2 paper reports like 1.5 million data points, which could be like 400,000 prompts. So I would say like 6 to 8 million is safe to say that they're spending, if not more, they're probably also buying other types of data and or throwing out data that they don't like, but it's very comparable to compute costs. But the compute costs listed in the paper always are way lower because all they have to say is like, what does one run cost? But they're running tens or hundreds of runs.
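Circling back to the "small classifier after the model" idea from a moment ago, deployment-time filtering is usually just a gate around generation, roughly like this sketch (the generator and classifier here are hypothetical stand-ins):

```python
def moderated_generate(generate_fn, classify_fn, prompt, threshold=0.5):
    """Generate a reply, then gate it with a separate (much smaller) classifier.

    generate_fn: prompt -> completion string           (the RLHF'd model)
    classify_fn: text   -> probability the text is unsafe (hypothetical scorer)
    """
    completion = generate_fn(prompt)
    if classify_fn(completion) > threshold:
        return "Sorry, I can't help with that."
    return completion

# Stub functions just to show the control flow.
reply = moderated_generate(
    generate_fn=lambda p: f"Echo: {p}",
    classify_fn=lambda text: 0.0,
    prompt="Tell me about reward models.",
)
print(reply)
```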
So it's like, okay, like... Yeah, it's just kind of a meaningless number. Yeah, the data number would be more interesting.Alessio [00:41:42]: What's the depreciation of this data?Nathan [00:41:46]: It depends on the method. Like some methods, people think that it's more sensitive to the, this is what I was saying. It was like, does the type of instruction tuning you do matter for RLHF? So like, depending on the method, some people are trying to figure out if you need to have like what is called like, this is very confusing. It's called like on policy data, which is like your RLHF data is from your instruction model. I really think people in open source and academics are going to figure out how to use any preference data on any model just because they're scrappy. But there's been an intuition that to do like PPO well and keep improving the model over time and do like what Meta did and what people think that OpenAI does is that you need to collect new preference data to kind of edge the distribution of capabilities forward. So there's a depreciation where like the first batch of data you collect isn't really useful for training the model when you have the fifth batch. We don't really know, but it's a good question. And I do think that if we had all the LLAMA data, we wouldn't know what to do with all of it. Like probably like 20 to 40% would be pretty useful for people, but not the whole data set. Like a lot of it's probably kind of gibberish because they had a lot of data in there.Alessio [00:42:51]: So do you think like the open source community should spend more time figuring out how to reuse the data that we have or like generate more data? I think that's one of the-Nathan [00:43:02]: I think if the people are kind of locked into using synthetic data, people also think that synthetic data is like GPT-4 is more accurate than humans at labeling preferences. So if you look at these diagrams, like humans are about 60 to 70% agreement. And we're like, that's what the models get to. And if humans are about 70% agreement or accuracy, like GPT-4 is like 80%. So it is a bit better, which is like in one way of saying it.Swyx [00:43:24]: Humans don't even agree with humans 50% of the time.Nathan [00:43:27]: Yeah, so like that's the thing. It's like the human disagreement or the lack of accuracy should be like a signal, but how do you incorporate that? It's really tricky to actually do that. I think that people just keep using GPT-4 because it's really cheap. It's one of my like go-to, like I just say this over and over again is like GPT-4 for data generation, all terms and conditions aside because we know OpenAI has this stuff is like very cheap for getting pretty good data compared to compute or salary of any engineer or anything. So it's like tell people to go crazy generating GPT-4 data if you're willing to take the organizational like cloud of should we be doing this? But I think most people have accepted that you kind of do this, especially at individuals. Like they're not gonna come after individuals. I do think more companies should think twice before doing tons of OpenAI outputs. Also just because the data contamination and what it does to your workflow is probably hard to control at scale.Swyx [00:44:21]: And we should just mention at the time of recording, we've seen the first example of OpenAI enforcing their terms of service. ByteDance was caught, reported to be training on GPT-4 data and they got their access to OpenAI revoked. 
So that was one example.Nathan [00:44:36]: Yeah, I don't expect OpenAI to go too crazy on this cause they're just gonna, there's gonna be so much backlash against them. And like, everyone's gonna do it anyways.Swyx [00:44:46]: And what's at stake here to spell it out is like, okay, it costs like $10 to collect one data point from a human. It's gonna cost you like a 10th of a cent with OpenAI, right? So like it's just orders of magnitude cheaper. And therefore people-Nathan [00:44:58]: Yeah, and it's like the signal you get from humans for preferences isn't that high. The signal that you get from humans for instructions is pretty high, but it is also very expensive. So like the human instructions are definitely like by far and away the best ones out there compared to the synthetic data. But I think like the synthetic preferences are just so much easier to get some sort of signal running with and you can work in other, I think people will start working in other goals there between safety and whatever. That's something that's taking off and we'll kind of see that. I think in 2024, at some point, people will start doing things like constitutional AI for preferences, which will be pretty interesting. I think we saw how long it took RLHF to get started in open source. Instruction tuning was like the only thing that was really happening until maybe like August, really. I think Zephyr was the first model that showed success with RLHF in the public, but that's a long time from everyone knowing that it was something that people are interested in to having any like check mark. So I accept that and think the same will happen with constitutional AI. But once people show that you can do it once, they continue to explore.Alessio [00:46:01]: Excellent.Swyx [00:46:01]: Just in the domain of human preference data suppliers, Scale.ai very happily will tell you that they supplied all that data for Llama 2. The other one is probably interesting, LMSYS from Berkeley. What they're running with Chatbot Arena is perhaps a good store of human preference data.Nathan [00:46:17]: Yeah, they released some toxicity data. They, I think, are generally worried about releasing data because they have to process it and make sure everything is safe and they're a really lightweight team. I think they're trying to release the preference data. If we make it to evaluation, I'd pretty much say that Chatbot Arena is the best limited evaluation that people have to learn how to use language models. And like, it's very valuable data. They also may share some data with people that they host models from. So like if your model is hosted there and you pay for the hosting, you can get the prompts because you're pointing the endpoint at it and that gets pinged to you, and any real LLM inference stack saves the prompts that come through.
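Tying together the synthetic-preference and agreement-rate threads from the last few exchanges, the usual LLM-as-judge loop looks roughly like the sketch below: ask a strong model which of two responses is better, then measure how often those labels match human labels. The call_llm function is a placeholder for whatever client you use, and the prompt wording is only illustrative.

```python
JUDGE_TEMPLATE = (
    "Which response to the prompt is better? Answer with exactly 'A' or 'B'.\n\n"
    "Prompt: {prompt}\n\nResponse A: {a}\n\nResponse B: {b}\n"
)

def judge_preference(call_llm, prompt, response_a, response_b):
    """call_llm: str -> str, a placeholder for your model client of choice."""
    verdict = call_llm(JUDGE_TEMPLATE.format(prompt=prompt, a=response_a, b=response_b))
    return "A" if verdict.strip().upper().startswith("A") else "B"

def agreement_rate(synthetic_labels, human_labels):
    """Fraction of comparisons where the judge picked the same side as the human."""
    matches = sum(s == h for s, h in zip(synthetic_labels, human_labels))
    return matches / len(human_labels)

# Toy check with a fake judge that always answers 'A'.
labels = [judge_preference(lambda _: "A", "p", "x", "y") for _ in range(4)]
print(agreement_rate(labels, ["A", "B", "A", "A"]))  # 0.75
```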
His brain disease worsening, Rick Cahill risks everything—even his life—to provide for his fractured family's future. San Diego private investigator Rick Cahill's past comes back to haunt him when he's at his most vulnerable. His wife, Leah, has fled with their daughter, Krista, to her parents' home in Santa Barbara. She fears Rick's violent outbursts brought on by his potentially fatal brain disorder, CTE—and she doesn't trust that he'll ever be able to tame his manic desire to bring his own brand of justice to an unjust world. Rick desperately wants to reunite his family and help provide for Krista's future—one he fears he won't be alive to see. A jumpstart toward that future appears in the form of Peter Stone, Rick's longtime enemy. Stone offers Rick $50,000 to find a woman he claims can save his life with a kidney transplant. Rick can't pass up the chance to buttress Krista's future. When what seems like a simple missing person case spirals out of control into cryptocurrency machinations, dead bodies, and an outgunned faceoff, Rick is forced to battle evil from his past. Can he stay alive long enough to see his family one last time? Support this show http://supporter.acast.com/houseofmysteryradio. Become a member at https://plus.acast.com/s/houseofmysteryradio. Hosted on Acast. See acast.com/privacy for more information.
On episode #109 of Mares in Black, we talk with the wonderful crew that has taken on the responsibility of updating and maintaining Identify Your Breyer (IDYB) after the untimely passing of founder Janice Cox. They talk through the roles and responsibilities of the crew, the processes for keeping the database current, and the vision for the future of the sites. All the model horse news and IG in progress is also in this drop! Something Elated by Broke For Free is licensed under an Attribution 3.0 United States License.
Today in the ArtZany Radio studio Paula Granquist welcomes guests from the Northfield Arts Guild; First from the musical comedy Sweet Charity director Marc Robinson and performer Sharon Lane-Getaz and then from the Cannon Valley Regional Orchestra concert Atmospheres conductor Paul Niemisto and harpist Elinor Niemisto. Sweet Charity by Peter Stone and Neil Simon. Director: Marc Robinson, Musical Director: Dan Kallman, Choreographer: Shari Setchell. […]
In this episode, performer and producer Andrea Prestinario discusses Jeanine Tesori and Lisa Kron's 2015 musical Fun Home. We also talk about the song "He Plays the Violin" from Sherman Edwards and Peter Stone's 1969 musical 1776. You can write to scenetosong@gmail.com with a comment or question about an episode or about musical theater, or if you'd like to be a podcast guest. Follow on Instagram at @ScenetoSong, on X/Twitter at @SceneSong, and on Facebook at “Scene to Song with Shoshana Greenberg Podcast.” And be sure to sign up for the new monthly e-newsletter at scenetosong.substack.com. Contribute to the Patreon. The theme music is by Julia Meinwald. Music played in this episode: "Days and Days" from Fun Home "Telephone Wire" from Fun Home "Maps" from Fun Home "Edges of the World" from Fun Home "Ring of Keys" from Fun Home "He Plays the Violin" from 1776
Peter Stone is the founder of The Sovereign Project which is an institution that protects and reclaims the rights and freedom of each individual by providing powerful tools and education, while uniting others who also choose to be free. He has been on previous Episodes #129 #141 #181 ======= Thanks to my Sponsors for Helping Support me: If you or somebody you know is struggling with anxiety and want to know how to be 100% anxiety free, in 6 weeks, without therapy or drugs, fully guaranteed - then let me tell you about our sponsor Daniel Packard. His research company spent 8 years testing to develop an innovative process that solves your anxiety permanently in just 6 weeks - with an astounding 90% success rate. Because their program is so effective, people who join their program only pay at the end, once they have clear, measurable results. If you're interested in solving your anxiety in 6 weeks - fully guaranteed - and you want to learn more and have a free consultation with Daniel, go to https://www.danielpackard.com/ -------------------------- Do you have High Blood Pressure and/or want to get off the Meds Doctors are amazed at what the Zona Plus can do $50 Discount with my Code ROY https://www.zona.com/discount/ROY —----- Quality Polish manufacturer of Metal Products for Telecommunication + workshop equipment and other metal articles. Brochure https://bit.ly/ROY-partnercode . Let us know if you would like a quotation shipped internationally and very competitive rates Speaking Podcast Social Media / Coaching My Other Podcasts https://bio.link/podcaster ============ About my Guest: Peter is the founder of The Sovereign Project which is an institution that protects and reclaims the rights and freedom of each individual by providing powerful tools and education, while uniting others who also choose to be free. There are two states a person can be in this world: you are either sovereign or a slave; the choice is only yours to make. Declaring you are sovereign, that your status is as a free man or woman, requires courage, fortitude and the will to stand up for your rights. This task is much less daunting when you're united with and supported by others of like mind. What we Discussed: - His Workshops & Online Course (3 mins) - The Sovereign Wiki ( 5 mins) - Knowing who to Trust ( 6 mins) - You must learn to do this yourself ( 7 mins) - The Fraud in our Bank Accounts ( 10 mins) - The Trickery with Cash ( 13 mins) - Why has a Bank a PO Box ( 17 mins) - The Event on the 1st Oct 2032 (18 mins) - There are a lot of different ways to get results ( 24 mins) - Speeding Tickets ( 27 mins) - Do Not Fear Authority ( 30 mins) - Which Black Laws should we use ( 34 mins) - Use Delay tactics to benefit you ( 36 mins) - Why you should have witnesses in Court ( 39 mins) - The legal representative getting kickbacks ( 42 mins) - The advantages of a Proper Trust ( 47 mins) - A bond to Insure your Car ( 51 mins) - The Meaning of Mandates ( 53 mins) - Corporate trickery with Countries and companies ( 57 mins) - What can people do to stop the Tyranny ( 1 hr) - Diplomatic Immunity ( 1hr 3 mins) - Rules will be broken anyway ( 1hr 6 mins) - How we should Sign our Name ( 1 hr 10 mins) - If made to Sign how can you protect yourself ( 1 hr 17 mins) ==================== How to Contact Peter Stone: https://www.thesovereignproject.live/ =============== Donations https://www.podpage.com/speaking-podcast/support/ Speaking Podcast Social Media / Coaching My Other Podcasts + Donations https://bio.link/podcaster
1776 | Book by Peter Stone | Music & Lyrics by Sherman Edwards | Based on a concept by Sherman Edwards
Works Consulted & Reference: 1776 (Original Libretto) by Peter Stone & Sherman Edwards; "The Making of America's Musical - 1776: The Story Behind the Story" by Jeffrey Kare
Music Credits:
"Overture" from Dear World (Original Broadway Cast Recording) | Music by Jerry Herman | Performed by Dear World Orchestra & Donald Pippin
"The Speed Test" from Thoroughly Modern Millie (Original Broadway Cast Recording) | Music by Jeanine Tesori, Lyrics by Dick Scanlan | Performed by Marc Kudisch, Sutton Foster, Anne L. Nathan & Ensemble
"Why God Why" from Miss Saigon: The Definitive Live Recording (Original Cast Recording / Deluxe) | Music by Claude-Michel Schönberg, Lyrics by Alain Boublil & Richard Maltby Jr. | Performed by Alistair Brammer
"Back to Before" from Ragtime: The Musical (Original Broadway Cast Recording) | Music by Stephen Flaherty, Lyrics by Lynn Ahrens | Performed by Marin Mazzie
"Chromolume #7 / Putting It Together" from Sunday in the Park with George (Original Broadway Cast Recording) | Music & Lyrics by Stephen Sondheim | Performed by Mandy Patinkin, Bernadette Peters, Judith Moore, Cris Groenendaal, Charles Kimbrough, William Parry, Nancy Opel, Robert Westenberg, Dana Ivey, Kurt Knudson, Barbara Bryne
"What's Inside" from Waitress (Original Broadway Cast Recording) | Music & Lyrics by Sara Bareilles | Performed by Jessie Mueller & Ensemble
"Sit Down, John" from 1776 (Original Broadway Cast Recording) | Music & Lyrics by Sherman Edwards | Performed by Sherman Edwards, William Daniels, 1776 Ensemble, Peter Howard
"Maria" from The Sound of Music (Original Soundtrack Recording) | Music by Richard Rodgers, Lyrics by Oscar Hammerstein II | Performed by Evadne Baker, Anna Lee, Portia Nelson, Marni Nixon
"My Favorite Things" from The Sound of Music (Original Soundtrack Recording) | Music by Richard Rodgers, Lyrics by Oscar Hammerstein II | Performed by Julie Andrews
"Corner of the Sky" from Pippin (New Broadway Cast Recording) | Music & Lyrics by Stephen Schwartz | Performed by Matthew James Thomas
"What Comes Next?" from Hamilton (Original Broadway Cast Recording) | Music & Lyrics by Lin-Manuel Miranda | Performed by Jonathan Groff
Covid-19 exposed long-standing weaknesses in the supply chain and transport infrastructure. Dave Joynt, Managing Partner in Brookfield's Infrastructure Group, and Peter Stone, a Senior Vice President at Brookfield focused on portfolio management, discuss the need for supply chain resilience and transport assets like roads, rail, ports and export terminals. They tell us what the outlook is for these industries, why these assets are so important and where the opportunities are. Please read this disclaimer (https://www.brookfield.com/podcast-disclaimer) before listening.
Christian nationalism is on full display at stops of the ReAwaken America tour – conferences that fuse Christian language and symbols with conspiracy theories and election denials. Amanda went inside the most recent one at a Trump property in Miami, and she shares her experiences in this podcast – from assembly-line baptisms to the reaction of the crowd as speakers moved seamlessly from religious worship songs to calls for political violence. SHOW NOTES Segment 1 (starting at 00:48): The Christian nationalism of the ReAwaken America tour Amanda and Holly discussed the ReAwaken America tour in episode 5 of season 4: Christian nationalism and the midterm elections Amanda and Holly mention this article about the ReAwaken Tour in The New York Times by Michelle Goldberg: Whose Version of Christian Nationalism Will Win in 2024? Amanda wrote a response to Michael Flynn's call for “one religion” in 2021, published by Baptist News Global: If you're paying attention to Christian nationalism, you won't be shocked by Michael Flynn's call for ‘one religion under God' Segment 2 (starting at 05:29): The Pastors for Trump event Amanda and Holly mentioned this article on the Pastors for Trump group by Peter Stone for The Guardian: Pro-Trump pastors rebuked for ‘overt embrace of white Christian nationalism' During this segment, we played a clip of Pastor John Bennett speaking during the Pastors for Trump event in Miami. Segment 3 (starting at 15:34): ReAwaken America, baptisms, and our counter-witness Amanda and Holly mentioned Brian Kaylor's reporting on the ReAwaken America tour. You can see his twitter thread with clips from Miami and read his latest piece in the A Public Witness newsletter, which is part of the Word&Way network: Michael Flynn's Soup for the Soulless For more about Baptism and different ways Christian denominations approach it, check out this story from 2001 by the PBS program Religion and Ethics Newsweekly. BJC and Faithful America created electronic billboards that were on trucks and a boat in Miami. See the video of the billboards in this post on the @EndChristianNationailsm Instagram account. Visit ChristiansAgainstChristianNationalism.org to explore the resources provided by the Christians Against Christian Nationalism campaign, including a statement anyone who identifies as a Christian can sign. Respecting Religion is made possible by BJC's generous donors. You can support these conversations with a gift to BJC.
In this new installment of our series on cinema with a capital C we bring you "CHARADE". A perfect combination of suspense, romance, intrigue and comedy. Directed by Stanley Donen, with a couple blessed by grace, Cary Grant and Audrey Hepburn, and a luxury supporting cast: James Coburn, George Kennedy and Walter Matthau. On top of that there is Paris and the marvelous score by Henry Mancini, along with that screenplay by Peter Stone, so well constructed, and those direct, ironic, perfect lines of dialogue. Javier Jiménez, Oscar Salazar and Paco Dolz get together to try to convince you to watch it, or to watch it again. It's the kind of dish you don't mind having a hundred times over.
It's been another month of impressive and unsettling AI breakthroughs. And so, along with excitement, these breakthroughs have also been met with concerns about the risks AI could pose to society. Take OpenAI's release of GPT-4, the latest iteration of its ChatGPT chatbot. According to the company, it can pass academic tests (including several AP course exams) and even do your taxes. But NPR's Geoff Brumfiel test drove the software, and found that it also sometimes fabricated inaccurate information. Wednesday, more than a thousand tech leaders and researchers - among them, Elon Musk - signed an open letter calling for a six-month pause in the development of the most powerful AI systems. NPR's Adrian Florido spoke with one signatory, Peter Stone, a computer science professor at the University of Texas. NPR's Shannon Bond has more reporting on AI and disinformation. In participating regions, you'll also hear a local news segment to help you make sense of what's going on in your community. Email us at considerthis@npr.org.
Peter Stone - The Sweetest Ache - in conversation with David Eastaugh The Sweetest Ache were a six-piece band from Swansea featuring Simon Court (vocals), Stuart Vincent (guitar), David Walters (bass), Geraint Morris (drums), Peter Stone (guitar) and Ian Saberton (keyboards). They recorded three singles and a mini-album for Sarah Records. After Sarah ended, a second album, Grass Roots, was released on Vinyl Japan
Unmovable in Truth 1 Corinthians 15:58 Hebrews 4:12 January 15, 2023 We believe the Bible is the Word of God. It is Inspired (God Breathed). It is Authoritative. It is Infallible (without error). It is Good News for all people. We can't live in a fallen society and not be tainted by its belief system without Absolute Truth being our compass. 2 Tim 3:16-17 All Scripture is given by inspiration of God, and is profitable for doctrine, for reproof, for correction, for instruction in righteousness, that the man of God may be complete, thoroughly equipped for every good work. Heb 4:12 For the word of God is living and powerful, and sharper than any two-edged sword, piercing even to the division of soul and spirit, and of joints and marrow, and is a discerner of the thoughts and intents of the heart. Faith is required to believe God's Word. Paul wrote in Romans 12:3 that God has given to every believer a ‘measure' of faith. In Romans 10:17 he wrote, “Faith comes by hearing and hearing by the Word of God.” Some say, I need proof the Bible is Truth before I believe. The Truth of God's Word is proven by prophecy, archeology, and history. 1. Fulfilled prophecy proves the Bible is Inspired (God-Breathed). The Bible was written over 1600 years by 40 authors in 66 books. It contains over 2,500 prophecies, and around 2,000 have already been fulfilled to the letter. The probability of all these prophecies having been fulfilled by chance without error is less than one in 10^2000 (that is, 1 with 2,000 zeros written after it)! 374 of these prophecies were Messianic Prophecies. Peter Stone in his book, Science Speaks, states the probability of even 8 of these prophecies being fulfilled in one man is 1 in 10^17. That is 1 in 100,000,000,000,000,000! 2. Archeology proves the Bible's accuracy. Archeology has never uncovered anything that contradicts the Bible! Hundreds of cities have been discovered by Archeologists. Nineveh, Jericho, Joppa, Ephesus, Capernaum, Bethel to name a few. Nelson Glueck, a renowned Jewish archaeologist, stated that, “no archaeological discovery has ever disproved a Biblical reference!” As Christians, our faith does NOT depend on Archeological discoveries, our faith rests on the power of God and the revelation of Jesus Christ. 3. History proves the Bible's validity. The Bible is not written as a history book, but it is a book that contains much history. In his article, Historical Proof of the Bible, Jim Franks says, “Roman and Jewish historians were no fans of Christianity, but they give historical proof of the Bible, including the life of Jesus Christ.” The fact that historians have confirmed more than 100 biblical characters in secular history is impressive and provides a remarkable proof for the validity of Scripture. Robert Van Voorst, in his book Jesus Outside the New Testament wrote, “No pagans and Jews who opposed Christianity denied Jesus' historicity or even questioned it” The Word of God, the Bible, is Living and Active. None of this Book is irrelevant or insignificant. The Word of God can affect your life. How it does depends on how you choose to accept and apply the Word. Martin Luther declared, “My Conscience is captive to the Word of God. To go against my conscience is neither right nor safe.” Jesus used the Word of God to defeat Satan, confound the religious and bring hope to the sinner. The Word brings light, direction, guidance, peace, hope, assurance and so much more to the one who chooses to believe. 
+++++++ You can find our service times on our website: https://allnationstallahassee.com/ You can find sermon highlights on Twitter here: https://mobile.twitter.com/allnationstally
In this episode, composer/lyricist and theatrical producer Gregory Jacobs-Roseman discusses history in musical theater, looking at how American history is written about through Sherman Edwards and Peter Stone's 1969 musical 1776 and others. We also talk about Jerry Herman's song "Avenue A" from the 1996 TV Movie Mrs. Santa Claus. You can write to scenetosong@gmail.com with a comment or question about an episode or about musical theater, or if you'd like to be a podcast guest. Follow on Instagram at @ScenetoSong, on Twitter at @SceneSong, and on Facebook at “Scene to Song with Shoshana Greenberg Podcast.” And be sure to sign up for the new monthly e-newsletter at scenetosong.substack.com. Contribute to the new Patreon. The theme music is by Julia Meinwald. Music played in this episode: "Sit Down, John" from 1776 "The Lees of Old Virginia" from 1776 "Cool, Cool, Considerate Men" from 1776 "Avenue A" from Mrs. Santa Claus.
In this episode, Peter Stone, executive director of Sony AI, joins Ben Wodecki to discuss this year's RoboCup and the AI that can beat you at Gran Turismo.
Freedom Broadcasters Livestream On Sept 22, 2022 Thursday Guest: Sovereign Pete - Peter Stone Topic: ”Do Not Pay: Why It is a Trap” Bio: Pete Stone, founder and CEO of the Sovereign Project, is an author and researcher, covering all aspects of the corrupt global system and the law in relation to our true rights. Through the Project, Pete's ambition is to help people to empower themselves with the same knowledge and connect those who wish to be free to become an unstoppable force for world peace. What we Discussed: - Why he set up the Sovereign Project - Different Jurisdictions - Magna Carta was not for the People - Courts tricking you so that you lose - Different Types of Trusts - Universal Commercial Code - Be Your own Postmaster General - The Corruption with the Birth Certificate - The Difference between Replying & Responding - Notice of Conditional Acceptance - Don't Pay Movement is a Trap - Germany has Gas & Electricity increase over 5 times - Utility Companies are Credit Companies - Fit Your Own Utility meters - Who is the Creditor behind the Debt - Affidavit Statement - True Bill - The Trickery with Fonts and Boxes on Letters - Cash is using font fraud - Peters Workshops - Civil Law comes from the Roman Empire and more How to Contact Peter: Website/Social media links: www.thesovereignproject.live Instagram: @thesovereignproject.live Facebook: https://www.facebook.com/The-Sovereign-Project-103072248658366/ Odysee: @thesovereignproject ================================================= More about Roy: All Podcasts + Coaching and Social Media https://bio.link/podcaster https://awakeningpodcast.org/ Video https://www.bitchute.com/channel/y2XWI0VCPVqX/ =================================================== Interview Panel Grace Asagra, RN MA (Holistic Nurse, US, originally from the Phil) Podcast: Quantum Nurse: Out of the Rabbit Hole from Stress to Bless www.quantumnurse.life Hartmut Schumacher Podcast: GO YOUR OWN PATH https://anchor.fm/hartmut-schumacher-path Roy Coughlan Podcast: AWAKENING https://www.awakeningpodcast.org
Hosts Dan and Josh drift back to a different time when revivals had budgets and stars were appropriately cast in this week's episode of Annie Get Your Gun. Reba McEntire makes a Broadway debut. Peter Stone rewrites a book. There's a trapeze act. Who could ask for anything more? (Wrong Merman) Topics include the ubiquity of Irving Berlin, the death of Jerome Kern, and Dan reveals his constantly-drifting-to-anywhere-in-Europe German accent. Tune in to next week's episode when we discuss Sunday in the Park with George; specifically, the Roundabout Revival's performance from 2008! Contact us: unccpodcast@gmail.com Twitter: @unccpodcast Instagram: @unccpodcast
We thought it might be fun to make ourselves cry by both rewatching Lexi Olinsky's death AND an episode of Chicago Justice. Yep, that's right, we're covering THAT crossover. In addition to the case, we discuss what this crossover lacked for us, Anna taking the job at Med for Severide, how much we miss both Al and Antonio, Gina's hatred of Peter Stone and so much more! News — 2:15 Patron Shoutouts — 19:13 Fire 5x15 — 23:25 PD 4x16 — 1:09:09 Justice 1x01 — 1:25:15 As always, we want to hear what you think; make sure you are following us on Twitter (@meetusatmollys), or email us at meetusatmollys@gmail.com to continue the discussion. Our inbox is always open and a safe space for you all to share your thoughts and feelings.
Peter Stone of The Sovereign Projects joins again for a varying discussion on some practical applications of "common law" and other related matters. Donate to the podcast: BuyMeACoffee.com/PatrickBlack --- Support this podcast: https://anchor.fm/alighton/support
We are pleased to bring to you the guest speaker talk from the June 2022 meeting of the Whitechapel Society. Peter Stone is a committee member of the Docklands History Group, the London Historians and several other London Clubs and Societies and is the author of the highly praised book entitled The Port of London: A Vast Emporium of All Nations https://www.thehistoryoflondon.co.uk www.whitechapelsociety.com
As part of the research for the podcast on Barbarian: Der mächtigste Krieger and the "Stay Forever Spielt" season on Demoniak, Gunnar and Christian spoke with Peter Stone, founder and managing director of the English developer Palace Software. In its short heyday between 1984 and 1992, Palace produced a string of great games. In the interview, Peter walks us through the history of the company. The interview was conducted in English; there is a German summary at the end. If your podcatcher displays chapter marks, you can jump straight to that point. Enjoy listening!
In 1987 a new fighting game appeared on the C64 - and the still-adolescent Gunnar and his even more adolescent friend Marco spent weeks doing nothing but trying to chop each other's heads off in duels. The title was called Barbarian, and it was exactly the kind of game that was irresistible to male teenagers in the 80s: local multiplayer on the same screen, a Conan-inspired fantasy setting, and a daring mix of sex and violence. Gunnar and Chris talk about how the game should be judged in the context of its time and how its development went, and along the way tell the story of its developer, the English company Palace Software. Note: the episode has translated interview clips; if you don't like that, you can download a version without over-dub here: https://www.stayforever.de/wp-content/uploads/2022/05/Stay_Forever_Ep121_Barbarian_OV.mp3 Game info: Subject: Barbarian: Der mächtigste Krieger, 1987 (in the USA: "Death Sword") Platform: C64, Atari ST, Schneider CPC, ZX Spectrum; later MS-DOS, Amiga, BBC Micro, Apple II Developer: Palace Software Publisher: Palace Software (in the USA: Epyx) Genre: fighting game Designers: Steve Brown, Richard Leinfellner, Stanley Schembri Music: Richard Joseph Podcast credits: Speakers: Christian Schmidt, Gunnar Lott - with interview clips from Peter Stone and Richard Leinfellner Audio production: Fabian Langer, Christian Schmidt Title artwork: Paul Schmidt Intro, outro: Nino Kerl (announcer); Chris Hülsbeck (music)
How can our current systems of government be updated to reflect modern needs? Through citizen assemblies and a more accessible format, there may be a more significant opportunity for change. Press play to learn:
- The function of a stratified sample
- Issues in which a citizen council has made a difference in policy
- Examples of when such strategies have been implemented
Offer: This episode is sponsored by Bowmar Nutrition. To receive a 5% discount, use the code GENIUS5 at checkout. Go to BowmarNutrition.com to shop now! Dr. Peter Stone, an Associate Professor of Political Science at Trinity College, Dublin, shares his work on how ancient practices can revolutionize modern politics. Political change and how companies or corporations can be run have long been left up to a very small group of people. However, if we can accept a change in thinking, the true benefits of democracy can be realized. A fresh perspective can be brought into leadership by selecting positions of power through a stratified random selection. Even if this is not implemented for direct decisions, citizens' input can offer helpful reality checks to even established decision-making procedures. To learn more, visit https://www.sortitionfoundation.org Episode also available on Apple Podcast: http://apple.co/30PvU9C
Peter Stone is the founder and CEO of the Sovereign Project. He is an author and researcher covering all aspects of the corrupt global system and the law in relation to our true rights. Help us fight censorship! Get immediate access to exclusive, censorship-free content by donation, or for free by becoming a member here.
Peter Stone, AKA the Sovereign Man, joins us on The AJ Roberts Show to explain how we in the western world can break free of the matrix and into our own sovereign being. In this information-packed episode, Peter explains how the majority of our lives have been a big cover-up by globalists, central banks and governments. You will learn how we ALL have our own sovereign trust funds issued at birth, how our birth certificates aren't what you think they are, and how mortgages, council tax, credit cards etc. are all fraudulent, with us paying interest on our own money.
Episode Summary: Erik McCormack is a singer/songwriter/entertainer from East Setauket. Erik is a superb vocalist/guitarist with an interesting start in the music industry. We'll unpack his Broadway roots as well as his unique live performance, vocal harmonies, and expert use of a looping device. Episode Notes: This episode features Erik McCormack, who continues to impress audiences with his vocal and guitar-playing skills. Erik offers his advice for those new to the gig scene on Long Island. He talks about rubbing elbows with famous artists and shares his insight on the Long Island music scene. Erik's original song in this episode, "A Song", was recorded back when he was seventeen. We'll also hear his cover of "Don't Worry, Be Happy", which showcases his talent for using a looping device from time to time in his live performances. Erik's vocal talents were discovered early in life: he toured with Marie Osmond through twenty-six states in the reboot of "The Sound of Music" when he was thirteen. At seventeen, he made his Broadway debut in the musical 1776 as a Courier, sharing the stage with Brent Spiner (John Adams) of Star Trek: The Next Generation fame. 1776 is a musical with music and lyrics by Sherman Edwards and a book by Peter Stone. The show is based on the events leading up to the signing of the Declaration of Independence, telling the story of John Adams's efforts to persuade his colleagues to vote for American independence and to sign the document. In 2001, Erik returned to the stage with "The Adventures of Tom Sawyer". If you enjoyed the show, please Subscribe, Rate, and Review at Apple Podcasts, Spotify, or wherever you get your podcasts. Connect with The Long Island Sound Podcast: Website: https://GigDestiny.com/podcast Follow Steve Yusko, GigDestiny.com, and his adventures: Website: https://www.GigDestiny.com Twitter, Instagram, YouTube, Facebook Spotify: https://open.spotify.com/show/21aCeQ The growth of The Long Island Sound Podcast has been exponential. Help us grow the show! Subscribe to the GigDestiny.com site here for bonus content. Subscribe to our YouTube channel. Call the Listener Line & leave your comments: (631) 800-3579 Remember to Rate & Review the show! Help us keep the conversation going with your donation - Click Right Here or go to GigDestiny.com Buzzsprout - Let's get your podcast launched! Start for FREE Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Episode #3: The New Heroes The first part of a multi-episode exploration of Neal Adams' Continuity Comics properties. 00:00:17 Neal Adams v Jack Kirby 00:04:08 The New Heroes / Mike Nasser 00:09:34 Starspawn 00:13:46 Shadow Hunter & Skateman 00:15:13 Ms. Mystic by Neal Adams, Mike Nasser, & Various 00:45:06 Urth 4 by Trevor Von Eeden, Ron Wilson & Peter Stone 00:57:39 Zero Patrol by Esteban Maroto & Neal Adams 01:03:45 Shaman by Neal Adams & Various 01:11:35 Captain Power and the Soldiers of the Future 01:17:23 Echo of Futurepast 01:22:16 Bucky O'Hare and the Toad Menace by Michael Golden & Larry Hama rolledspinepodcasts@gmail.com --- Send in a voice message: https://anchor.fm/diabolu-frank/message
Peter Stone of the Sovereign Project returns for a deeper dive into common law and its applications. BECOME A MONTHLY SUPPORTER OF A LIGHT ON PODCAST: https://anchor.fm/patrick-black7/support
Masks? Vaccine mandates? Peter Stone from the Sovereign Project stops by to talk more about Common Law and reclaiming your sovereignty from the fraudulent legal system. SUPPORT A LIGHT ON: https://anchor.fm/patrick-black7/support FIND PETER HERE: Website: www.thesovereignproject.live Instagram: @thesovereignproject.live Facebook: https://www.facebook.com/The-Sovereign-Project-103072248658366/ Odysee: @thesovereignproject --- Support this podcast: https://anchor.fm/alighton/support
Lani Gonzalez is back to discuss her favorite film star and his (un)surprisingly(?) charming 1964 film: Grant's penultimate screen appearance, also his last starring role, and the willingness of the most debonair of branded film stars to finally show silver hair, beard stubble, and an untucked shirt. Also: the juvenile delinquency of director Ralph Nelson; Oscar-winning writer Peter Stone's varied career; Leslie Caron; her character's curiosity about the taste of blood; and two recent Grant biographies, The Making of a Hollywood Legend by Mark Glancy (which Lani reviewed for Book & Film Globe) plus Scott Eyman's A Brilliant Disguise (which I kinda skimmed). And: Lani's favorite theatrical screening of Grant's Charade; trying to locate where the crop-duster sequence from North by Northwest would've taken place on IN-41; roles Grant turned down throughout his 40 years of commercial success; how his early vaudeville led him to master physical comedy; his wives; LSD; and Lani's picks for Grant's best, worst, and most underrated films. Gonzalez writes about film for Book and Film Globe and, alongside her husband (and former guest-host) AJ, runs the blog Cinema Then and Now. Father Goose is currently available on the Criterion Channel under the banner of “Cary Grant Comedies.” But hurry quick, as it's leaving February 28.
“Don't change your style. Come as you are and be true to yourself.” Peter Stone ( @peterstone_music ) started his music career incredibly young. He began recording for fun at the age of 14 and quickly started getting noticed as his group did more and more shows. His first single, “Number One”, instantly got the attention of local magazines and media outlets. Now at 22, he's been traveling and doing shows all summer. Working a full-time job by day, creating and producing music by evening and promoting by night is how Peter spends most of his time. When he does step away, he's hanging out with his boys playing basketball, messing around with music or playing chess. One of the biggest pieces of advice he can give is to be aware of the people you surround yourself with: “If you're around a bunch of people going where you want to be, you might just get there too.” Keeping his circle of friends tight and intentional is something that comes up over and over again in the interview. For him, being an independent artist means being free. He may not have all the support of a label's team, but that's why he has to work so hard to have the same impact. He also says: “Don't treat your passion like a job. Keep doing it the way you got started: FUN.” Peter advises all young artists, and anyone getting started, to stay true to who they are, develop their sound or craft and learn as much as possible. Even though the summer is coming to a close, Peter intends to leverage his work ethic and connections to book more shows and travel to more places in the future. To listen to Peter's music and connect with him online, follow the links below: https://www.instagram.com/peterstone_music/ https://soundcloud.com/peter-stone-475647293/fear-ft-upstars-prod-b-mac Connect with us: follow The United Promotion on Instagram at https://www.instagram.com/theunitedpromotion
The Definitive Debunking of the Cohen-In-Prague Canard. For some weird reason, the Deep State cannot let go of Prague, and thus we smell desperation at the dark heart of SpyGate. We recap the claim from the bogus Steele Dossier that Trump advisor Michael Cohen visited Prague in 2016 to conspire with the Russians to defeat Hillary Clinton. It just so happens that Fusion GPS, Christopher Steele, Nellie Ohr and Glenn Simpson got The Wrong Cohen in their illicit searches of the NSA database. Whoops! But they used the information anyway, to bamboozle FISA judges and surveil the Trump Campaign. Later on, the fraudulent claims were used to initiate the Mueller Investigation and cause harm to President Trump in the arena of public opinion. As a case study in the construction and dissemination of propaganda, we dismantle the latest recrudescence of Cohen-In-Prague as served up this week by Greg Gordon and Peter Stone of the McClatchy DC Bureau, built on anonymous sources who heard it from anonymous sources. We review the ongoing string of outright denials from Cohen's attorney Lanny Davis, and also from Michael Cohen himself. We survey the reporting of Greg Miller of The Washington Post, who has found zero evidence of Cohen-In-Prague. Further, we sample the MSNBC interview of McClatchy's Greg Gordon with a surprisingly skeptical Joy Reid. A total embarrassment to the journalistic profession. The mistakes made in and around the Prague linchpin reveal the methods, motives and wholesale corruption of America's Deep State. With Listener Calls & Music via REO Speedwagon, Sia, Frei Wild, George Harrison and Skeeter Davis. See omnystudio.com/listener for privacy information.
Host Cyrus Webb welcomes author Peter Stone to #ConversationsLIVE to discuss what it's been like to see the early response to his debut novel THE PERFECT CANDIDATE, how the story evolved, and how relatable the main character is to himself and others.