Podcast appearances and mentions of William Saunders

  • 43 PODCASTS
  • 76 EPISODES
  • 37m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • LATEST: Oct 29, 2024

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about William Saunders

Latest podcast episodes about William Saunders

The Ricochet Audio Network Superfeed
The Federalist Society's Teleforum: Religious Liberty and the Court: Looking Ahead to the 2024-2025 Term

The Ricochet Audio Network Superfeed

Oct 29, 2024


The Federalist Society is proud to host Mark Rienzi, President of the Becket Fund and Professor of Law at the Catholic University of America, for this year's annual discussion of Religious Liberty at the Court. This webinar will be moderated by William Saunders, Professor and Co-director of the Center for Religious Liberty at Catholic University […]

Teleforum
Religious Liberty and the Court: Looking Ahead to the 2024-2025 Term

Teleforum

Oct 29, 2024 54:40


The Federalist Society is proud to host Mark Rienzi, President of the Becket Fund and Professor of Law at the Catholic University of America, for this year’s annual discussion of Religious Liberty at the Court. This webinar will be moderated by William Saunders, Professor and Co-director of the Center for Religious Liberty at Catholic University of America. Please join us for this latest installment which will look at recent developments in religious liberty litigation and ahead to the Supreme Court’s October term. Featuring: Prof. Mark L. Rienzi, President, Becket Fund for Religious Liberty; Professor of Law and Co-Director of the Center for Religious Liberty, Catholic University; Visiting Professor, Harvard Law School; (Moderator) Prof. William L. Saunders, Director of the Program in Human Rights, Catholic University of America

Poultry Keepers Podcast
William Saunders On Phoenix Chickens

Poultry Keepers Podcast

Oct 8, 2024 22:07 Transcription Available


In the Poultry Keepers Podcast, Rip Stalvey interviews William Saunders, a Phoenix poultry breeder from Florida. William shares his journey, starting at age four, and his breeding experiences over 19 years focusing on Phoenix birds. He discusses the rewards and challenges of breeding Phoenix, including weather conditions in Florida and maintaining bird health. He outlines his breeding process, emphasizing the importance of selecting traits like leg length and feather condition, and discusses his management practices such as feeding high-protein diets and separating males and females to prevent feather picking. William also shares his methods for maintaining bird lineage, customizing pen setups, and his participation in the American Phoenix Breeders Association. His advice to newcomers is to start with good stock and continuously learn from experienced breeders. You can email us at poultrykeeperspodcast@gmail.com. Join our Facebook Groups: Poultry Keepers Podcast - https://www.facebook.com/groups/907679597724837; Poultry Keepers 360 - https://www.facebook.com/groups/354973752688125; Poultry Breeders Nutrition - https://www.facebook.com/groups/4908798409211973. Check out the Poultry Keepers Podcast YouTube Channel - https://www.youtube.com/@PoultryKeepersPodcast/featured

Big Technology Podcast
What the Ex-OpenAI Safety Employees Are Worried About — With William Saunders and Lawrence Lessig

Big Technology Podcast

Jul 3, 2024 48:13


William Saunders is an ex-OpenAI Superalignment team member. Lawrence Lessig is a professor of Law and Leadership at Harvard Law School. The two come on to discuss what's troubling ex-OpenAI safety team members. We discuss whether Saunders' former team saw something secret and damning inside OpenAI, or whether it was a general cultural issue. And then, we talk about the 'Right to Warn,' a policy that would give AI insiders a right to share concerning developments with third parties without fear of reprisal. Tune in for a revealing look into the eye of a storm brewing in the AI community. ---- You can subscribe to Big Technology Premium for 25% off at https://bit.ly/bigtechnology Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Your Undivided Attention
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Your Undivided Attention

Jun 7, 2024 37:47


This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers. The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. RECOMMENDED MEDIA: The Right to Warn Open Letter; My Perspective On "A Right to Warn about Advanced Artificial Intelligence": a follow-up from William about the letter; Leaked OpenAI documents reveal aggressive tactics toward former employees: an investigation by Vox into OpenAI's policy of non-disparagement. RECOMMENDED YUA EPISODES: A First Step Toward AI Regulation with Tom Wheeler; Spotlight on AI: What Would It Take For This to Go Well?; Big Food, Big Tech and Big AI with Michael Moss; Can We Govern AI? with Marietje Schaake. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Journey of the Rhode Runner
Episode 17: Creating Your Own Narrative with Angie Leitnaker

Journey of the Rhode Runner

May 29, 2024 77:53


On this episode we are thrilled to welcome Angie Leitnaker. Angie's journey through personal growth and overcoming grief has made her a passionate advocate for human potential. With 13 years of teaching experience and extensive work hosting global retreats, she has dedicated herself to helping others discover their inner greatness. Angie believes everyone wants to be seen, heard, and valued, and she emphasizes practical tools to not just survive, but truly thrive. Founder of C.A.P.E. Global, she empowers children and adults to break through barriers and achieve lasting fulfillment. Join us as we explore Angie's inspiring journey and insights, and how you can discover or rediscover your own inner greatness! This episode is dedicated to 22 Too Many veteran Andrew Saunders. Andrew Jonathan Saunders, 25, passed away Wednesday, September 2, 2015. He was born in Asheville, NC on April 27, 1990 to Susan and William Saunders. Andrew went to school in Ridgecrest, CA where he found a love for playing music in jazz band with the trumpet, as well as playing guitar almost nonstop with his many friends. In addition to his musical talent, Andrew was also an accomplished discus thrower, being voted MVP two years in a row before competing in a CIF Championship. Andrew joined the US Army after high school. Once he graduated from basic training at the top of his class, he went to Apache helicopter mechanic training, where he also graduated top of his class. After tours in Ansbach, Germany, and Afghanistan, Andrew learned that he had been accepted into flight school, realizing his dream. After graduating from warrant officer school, he finished flight school as the “Distinguished Graduate.” He was then assigned as a Blackhawk pilot at Fort Carson, CO, where he was living at the time of his death. Andrew lived life to the fullest and brought out the best in everyone who knew him. He loved deeply and made lifelong friendships at every turn of his life. Andrew is survived by his mother Susan Lasell, step-father Richard Lasell, brother Brian Hoppus, and sister Sarah Hoppus. Final Rest: Riverside National Cemetery. ---------------------------------------------------------------- Angie can be found at https://www.cape-global.com/ Instagram: @angieleitnaker Facebook: CAPE Global - Creating A Powerful Experience --------------------------------------------------------------- Kerri can be found on Instagram: @running_with_the_rockstar Facebook: Every Run Has a Story You can find Paul - The Rhode Runner in the following places: Twitter: @TheRhodeRunner Instagram: @TheRhodeRunner Facebook Inspiring Journeys can be found on: InspiringJourneys.net Instagram: @InspiringJourneysPod Facebook You can also download and subscribe to the Inspiring Journeys Podcast at: Apple Podcasts iHeartRadio Spotify

Effective Altruism Forum Podcast
“Articles about recent OpenAI departures” by bruce

Effective Altruism Forum Podcast

May 24, 2024 2:21


This is a link post. A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them. Some quotes perhaps worth highlighting: Even when the team was functioning at full capacity, that “dedicated investment” was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models. - Jan suggesting that compute for safety may have been deprioritised even despite the 20% commitment. (Wired claims that OpenAI confirms that their "superalignment team is no more"). “I joined with substantial hope that OpenAI [...] The original text contained 1 footnote which was omitted from this narration. --- First published: May 17th, 2024 Source: https://forum.effectivealtruism.org/posts/ckYw5FZFrejETuyjN/articles-about-recent-openai-departures --- Narrated by TYPE III AUDIO.

The Walt Blackman Show
The Crisis of Black Abortion Rates and the Ongoing Battle for Natural Rights

The Walt Blackman Show

May 20, 2024 24:16 Transcription Available


Could the decline in the national percentage of the Black population be signaling a crisis many are hesitant to acknowledge? This urgent discussion is sparked by the alarmingly high abortion rates in the Black community, a phenomenon some are bold enough to label a genocide. Join us as we shine a light on the outcry from Black American leaders and pastors who recently protested a new Planned Parenthood clinic in Charlotte, North Carolina. The data presented is sobering—scholar Michael Novak points to the staggering 16 million Black babies aborted since 2002. It's a wake-up call demanding political engagement and community action, and we don't shy away from the tough questions or the uncomfortable truths that need to be faced head-on. The battle over natural rights is not a relic of history—it is a living debate, directly tied to the abortion conversation. We explore this philosophical struggle with insights from pro-life advocates like William Saunders and Harley Atkins, who compare the dehumanization of slavery to the denial of rights to the unborn. This episode confronts the government's inconsistent protection of life, liberty, and property, and the profound impact this has had on Black communities. As we dissect the parallels between the Dred Scott case and the devaluation of unborn lives, we invite you to join us in a powerful call for reflection on the society we want to build. It's not just a matter of policy; it's a question of our collective humanity.

The Nonlinear Library
EA - Articles about recent OpenAI departures by bruce

The Nonlinear Library

May 17, 2024 1:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Articles about recent OpenAI departures, published by bruce on May 17, 2024 on The Effective Altruism Forum. A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them. Some quotes perhaps worth highlighting: Even when the team was functioning at full capacity, that "dedicated investment" was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power - perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models. - Jan suggesting that compute for safety may have been deprioritised even despite the 20% commitment. (Wired claims that OpenAI confirms that their "superalignment team is no more"). "I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen," Kokotajlo told me. "I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit." (Additional kudos to Daniel Kokotajlo for not signing additional confidentiality obligations on departure, which is plausibly relevant for Jan too given his recent thread). Edit: Shakeel's article on the same topic. Kelsey's article about the nondisclosure/nondisparagement provisions that OpenAI employees have been offered. Wired claims that OpenAI confirms that their "superalignment team is no more". 1. ^ Covered by Shakeel/Wired, but thought it'd be clearer to list all names together. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

EMPIRE LINES
Giolo's Lament, Pio Abad (2023) (EMPIRE LINES x Ashmolean Museum)

EMPIRE LINES

Mar 28, 2024 18:12


Artist and archivist Pio Abad draws out lines between Oxford, the Americas, and the Philippines, making personal connections with historic collections, and reconstructing networks of trafficking, tattooing, and 20th century dictatorships. Pio Abad's practice is deeply informed by world histories, with a particular focus on the Philippines. Here, he was born and raised in a family of activists, at a time of conflict and corruption under the conjugal dictatorship of Ferdinand and Imelda Marcos (1965-1986). His detailed reconstructions of their collection - acquired under the pseudonyms of Jane Ryan and William Saunders - expose Western/European complicities in Asian colonial histories, from Credit Suisse to the American Republican Party, and critique how many museums collect, display, and interpret the objects they hold today. In his first UK exhibition in a decade, titled for Mark Twain's anti-imperial satire, ‘To the Person Sitting in Darkness' (1901), Pio connects both local and global histories. With works across drawing, text, and sculpture, produced in collaboration with his partner, Frances Wadworth Jones, he reengages objects found at the University of Oxford, the Pitt Rivers Museum, St John's College, and Blenheim Palace - with histories often marginalised, ignored, or forgotten. He shares why his works often focus on the body, and how two tiaras, here reproduced in bronze, connect the Romanovs of the Russian Empire to the Royal Family in the UK, all via Christie's auction house. Pio shares why he often shows alongside other artists, like Carlos Villa, and the political practice of Pacita Abad, a textile artist and his aunt. He talks about the ‘diasporic' objects in this display, his interest in jewellery, and use of media from bronze to ‘monumental' marble. Finally, Pio suggests how objects are not things, but travelling ‘networks of relationships', challenging binaries of East and West, and historic and contemporary experiences, and locating himself within the archives. Ashmolean NOW: Pio Abad: To Those Sitting in Darkness runs at the Ashmolean Museum in Oxford until 8 September 2024, accompanied by a full exhibition catalogue. Fear of Freedom Makes Us See Ghosts, Pio's forthcoming exhibition book, is co-published by Ateneo Art Gallery and Hato Press, and available online from the end of May 2025. For other artists who've worked with objects in Oxford's museum collections, read about: - Ashmolean NOW: Flora Yukhnovich and Daniel Crews-Chubbs, at the Ashmolean Museum. - Marina Abramović: Gates and Portals, at Modern Art Oxford and the Pitt Rivers Museum. For more about the history of the Spanish Empire in the Philippines, listen to Dr. Stephanie Porras' EMPIRE LINES on an Ivory Statue of St. Michael the Archangel, Basilica of Guadalupe (17th Century). And hear Taloi Havini, another artist working with Silverlens Gallery in the Philippines, on Habitat (2017), at Mostyn Gallery for Artes Mundi 10. WITH: Pio Abad, London-based artist, concerned with the personal and political entanglements of objects. His wide-ranging body of work, encompassing drawing, painting, textiles, installation and text, mines alternative or repressed historical events and offers counternarratives that draw out threads of complicity between incidents, ideologies and people. He is also the curator of the estate of his aunt, the Filipino American artist Pacita Abad. PRODUCER: Jelena Sofronijevic.
Follow EMPIRE LINES on Instagram: instagram.com/empirelinespodcast And Twitter: twitter.com/jelsofron/status/1306563558063271936 Support EMPIRE LINES on Patreon: patreon.com/empirelines

The Nonlinear Library
AF - Transformer Debugger by Henk Tillman

The Nonlinear Library

Mar 12, 2024 1:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transformer Debugger, published by Henk Tillman on March 12, 2024 on The AI Alignment Forum. Transformer Debugger (TDB) is a tool developed by OpenAI's Superalignment team with the goal of supporting investigations into circuits underlying specific behaviors of small language models. The tool combines automated interpretability techniques with sparse autoencoders. TDB enables rapid exploration before needing to write code, with the ability to intervene in the forward pass and see how it affects a particular behavior. It can be used to answer questions like, "Why does the model output token A instead of token B for this prompt?" or "Why does attention head H attend to token T for this prompt?" It does so by identifying specific components (neurons, attention heads, autoencoder latents) that contribute to the behavior, showing automatically generated explanations of what causes those components to activate most strongly, and tracing connections between components to help discover circuits. These videos give an overview of TDB and show how it can be used to investigate indirect object identification in GPT-2 small: Introduction; Neuron viewer pages; Example: Investigating name mover heads, part 1; Example: Investigating name mover heads, part 2. Contributors: Dan Mossing, Steven Bills, Henk Tillman, Tom Dupré la Tour, Nick Cammarata, Leo Gao, Joshua Achiam, Catherine Yeh, Jan Leike, Jeff Wu, and William Saunders. Thanks to Johnny Lin for contributing to the explanation simulator design. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
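For readers who want a concrete picture of the "why token A instead of token B" question, the usual formalization is a logit difference, and a component's contribution is estimated by ablating it and watching that difference move. The toy PyTorch sketch below shows only that bookkeeping on a made-up stand-in model; it is not the Transformer Debugger's code or API, and the names `components`, `unembed`, and `logit_diff` are hypothetical.

```python
import torch

torch.manual_seed(0)

d_model, vocab = 16, 10
# Stand-ins for model components that write into a residual stream.
components = torch.nn.ModuleList([torch.nn.Linear(d_model, d_model) for _ in range(4)])
unembed = torch.nn.Linear(d_model, vocab)

def logit_diff(x, token_a, token_b, ablate=None):
    """logit(A) - logit(B) for a stand-in residual stream, optionally
    zero-ablating one component's write into the residual."""
    resid = x
    for i, comp in enumerate(components):
        if i != ablate:                      # skip the ablated component's write
            resid = resid + comp(resid)
    logits = unembed(resid)
    return (logits[token_a] - logits[token_b]).item()

x = torch.randn(d_model)
baseline = logit_diff(x, token_a=3, token_b=7)
for i in range(len(components)):
    shift = logit_diff(x, 3, 7, ablate=i) - baseline
    print(f"component {i}: ablating it shifts the A-vs-B logit difference by {shift:+.3f}")
```

In TDB the same kind of measurement is applied to real GPT-2 components (neurons, attention heads, autoencoder latents) with automatically generated explanations attached; the sketch only illustrates the measurement itself.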

Teleforum
Talks with Authors: What It Means to Be Human

Teleforum

Oct 26, 2023 85:49


In What It Means to Be Human - The Case for the Body in Public Bioethics, Prof. O. Carter Snead investigates the tension between the natural limits of the human body and the political philosophy of autonomy, and the legal and policy challenges that arise when those two conflict. He proposes a new paradigm of how to understand being human and applies it to complex issues of bioethics, laying out a framework of embodiment and dependence. Join us for a special 90-minute webinar conversation with Prof. Snead moderated by Prof. William Saunders on “What it Means to Be Human” - both philosophically and practically. Featuring: --Prof. O. Carter Snead, Professor of Law, Director, de Nicola Center for Ethics and Culture, & Concurrent Professor of Political Science, University of Notre Dame Law School --[Moderator] Prof. William L. Saunders, Professor - Human Rights, Religious Liberty, Bioethics, Catholic University of America

The Ricochet Audio Network Superfeed
The Federalist Society's Teleforum: Religious Liberty and the Court – Looking Ahead to the Next Term

The Ricochet Audio Network Superfeed

Sep 27, 2023


For the past few Supreme Court terms we have hosted Mark Rienzi, President of the Becket Fund and Professor of Law at Catholic University of America, for a discussion of Religious Liberty at the Court moderated by William Saunders, Professor and Co-director of the Center for Religious Liberty at Catholic University of America. This installment […]

Teleforum
Religious Liberty and the Court - Looking Ahead to the Next Term

Teleforum

Sep 26, 2023 54:26


For the past few Supreme Court terms we have hosted Mark Rienzi, President of the Becket Fund and Professor of Law at Catholic University of America, for a discussion of Religious Liberty at the Court moderated by William Saunders, Professor and Co-director of the Center for Religious Liberty at Catholic University of America. This installment looked at the most recent term including the unanimous holding in Groff v. DeJoy and provided a preview of the October term. Featuring: --Prof. Mark L. Rienzi, President, Becket Fund for Religious Liberty; Professor of Law and Co-Director of the Center for Religious Liberty, Catholic University; Visiting Professor, Harvard Law School --[Moderator] Prof. William L. Saunders, Professor - Human Rights, Religious Liberty, Bioethics, Catholic University of America

Teleforum
Talks with Authors: Our Dear-Bought Liberty: Catholics and Religious Toleration in Early America

Teleforum

Sep 21, 2023 57:01


In his new book Our Dear-Bought Liberty: Catholics and Religious Toleration in Early America, Professor Michael D. Breidenbach investigates the way American Catholics fundamentally contributed to the conception of a separation between Church and State in the founding era, overcoming suspicions of loyalties to a foreign power with a conciliatory approach. In this installment in our “Talks with Authors” series, Prof. Breidenbach joins us to discuss his book and the story it tells in a conversation moderated by Prof. William Saunders. Featuring: --Prof. Michael D. Breidenbach, Associate Professor of History, Ave Maria University & Senior Affiliate for Legal Humanities, Program for Research on Religion and Urban Civil Society, University of Pennsylvania --(Moderator) Prof. William Saunders, Professor - Human Rights, Religious Liberty, Bioethics, Catholic University of America

The Nonlinear Library
LW - Neuronpedia - AI Safety Game by hijohnnylin

The Nonlinear Library

Jul 26, 2023 3:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neuronpedia - AI Safety Game, published by hijohnnylin on July 26, 2023 on LessWrong. Neuronpedia is an AI safety game that documents and explains each neuron in modern AI models. It aims to be the Wikipedia for neurons, where the contributions come from users playing a game. Neuronpedia wants to connect the general public to AI safety, so it's designed to not require any technical knowledge to play. Neuronpedia is in experimental beta: getting its first users in order to collect feedback, ideas, and build an initial community. OBJECTIVES Increase understanding of AI to help build safer AI Increase public engagement, awareness, and education in AI safety CURRENT STATUS I started working on Neuronpedia three weeks ago, and I'm posting on LessWrong to develop an initial community and for feedback and testing. I'm not posting it anywhere else, please do not share it yet in other forums like Reddit. There's an onboarding tutorial that explains the game, but to summarize: It's a word association game. You're shown one neuron ("puzzle") at a time, and its highest activations ("clues"). You then either vote for an existing explanation, or submit your own explanation. Neuronpedia's first "campaign" is explaining gpt2-small, layer 6. There is an "advanced mode" that allows testing custom activation text and shows more details/filters. Click "Simple" at the top right to toggle it. WHAT YOU CAN DO Play @ neuronpedia.org - feel free to use a throwaway GitHub account to log in. Give feedback, ideas, and ask questions. Join the community Discord: THE VISION Millions of casual and technical users play Neuronpedia daily, trying to solve each neuron (like NYT crossword/Wordle). There are weekly/monthly contests ("side quests"). Top scorers are ranked on leaderboards by country, region, etc. Neuronpedia sparks interest in AI safety for thousands of people and they contribute in other ways (switch fields, do research, etc). Researchers use the data to build safer and more predictable AI models. Companies post updated versions of their AI models (or parts of them) as new "campaigns" and iterate through increasingly safer models. HOW NEURONPEDIA CAME ABOUT After moving on from my previous startup, I reached out to 80,000 Hours for career advice. They connected me to William Saunders who provided informal (not affiliated with any company) guidance on what might be useful products to develop for AI safety research. Three weeks ago, I started prototyping versions of Neuronpedia, starting as a reference website, then eventually iterating into a game. Neuronpedia is seeded with data and tools from OpenAI's Automated Interpretability and Neel Nanda's Neuroscope. IS THIS SUSTAINABLE? Unclear. There's no revenue model, and there is nobody supporting Neuronpedia. I'm working full time on it and spending my personal funds on hosting, inference servers, OpenAI API, etc. If you or your organization would like to support this project, please reach out at johnny@neuronpedia.org. COUNTERARGUMENTS AGAINST NEURONPEDIA These are reasons Neuronpedia could fail to achieve one or more of its objectives. They're not insurmountable, but good to keep in mind. Can't get enough people to care about AI safety or think it's a real problem. Neurons are the wrong "unit" for useful interpretability and Neuronpedia is unable to adapt to the correct "unit" (groups of neurons, etc). 
Even the best human explanations are not good. Scoring algorithm for explanation is bad and can't be improved. Not engaging enough - the game isn't balanced, doesn't have enough "loops", etc. Bugs. Lack of funds. AI companies shut it down via copyright claims, cease and desist, etc. Unable to contain abusive users or spam. Too slow to stop misaligned AI. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please ...
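Since the post leans on a "scoring algorithm for explanations," it may help to sketch the comparison step used by the automated-interpretability tooling Neuronpedia says it is seeded with: simulate what a neuron's activations would be if a candidate explanation were true, then compare the simulation with the real activations. The simulator is stubbed out below with hand-written numbers; `score_explanation` and the example values are illustrative assumptions, not Neuronpedia's actual code.

```python
import numpy as np

def score_explanation(real_activations, simulated_activations):
    """Correlation between a neuron's measured activations and the activations
    a simulator predicts from a candidate explanation (higher = better)."""
    real = np.asarray(real_activations, dtype=float)
    sim = np.asarray(simulated_activations, dtype=float)
    if real.std() == 0 or sim.std() == 0:
        return 0.0
    return float(np.corrcoef(real, sim)[0, 1])

# Stub simulator outputs for one neuron across five text snippets.
real = [0.1, 2.3, 0.0, 1.9, 0.2]        # measured activations
sim_good = [0.0, 2.0, 0.1, 2.1, 0.0]    # simulation from a plausible explanation
sim_bad = [1.5, 0.1, 1.7, 0.0, 1.2]     # simulation from an unrelated explanation

print(score_explanation(real, sim_good), score_explanation(real, sim_bad))
```

The hard part, producing the simulated activations from an explanation in the first place, is exactly what the post lists as a possible failure mode.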

The Nonlinear Library: LessWrong
LW - Neuronpedia - AI Safety Game by hijohnnylin

The Nonlinear Library: LessWrong

Jul 26, 2023 3:57


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neuronpedia - AI Safety Game, published by hijohnnylin on July 26, 2023 on LessWrong. Neuronpedia is an AI safety game that documents and explains each neuron in modern AI models. It aims to be the Wikipedia for neurons, where the contributions come from users playing a game. Neuronpedia wants to connect the general public to AI safety, so it's designed to not require any technical knowledge to play. Neuronpedia is in experimental beta: getting its first users in order to collect feedback, ideas, and build an initial community. OBJECTIVES Increase understanding of AI to help build safer AI Increase public engagement, awareness, and education in AI safety CURRENT STATUS I started working on Neuronpedia three weeks ago, and I'm posting on LessWrong to develop an initial community and for feedback and testing. I'm not posting it anywhere else, please do not share it yet in other forums like Reddit. There's an onboarding tutorial that explains the game, but to summarize: It's a word association game. You're shown one neuron ("puzzle") at a time, and its highest activations ("clues"). You then either vote for an existing explanation, or submit your own explanation. Neuronpedia's first "campaign" is explaining gpt2-small, layer 6. There is an "advanced mode" that allows testing custom activation text and shows more details/filters. Click "Simple" at the top right to toggle it. WHAT YOU CAN DO Play @ neuronpedia.org - feel free to use a throwaway GitHub account to log in. Give feedback, ideas, and ask questions. Join the community Discord: THE VISION Millions of casual and technical users play Neuronpedia daily, trying to solve each neuron (like NYT crossword/Wordle). There are weekly/monthly contests ("side quests"). Top scorers are ranked on leaderboards by country, region, etc. Neuronpedia sparks interest in AI safety for thousands of people and they contribute in other ways (switch fields, do research, etc). Researchers use the data to build safer and more predictable AI models. Companies post updated versions of their AI models (or parts of them) as new "campaigns" and iterate through increasingly safer models. HOW NEURONPEDIA CAME ABOUT After moving on from my previous startup, I reached out to 80,000 Hours for career advice. They connected me to William Saunders who provided informal (not affiliated with any company) guidance on what might be useful products to develop for AI safety research. Three weeks ago, I started prototyping versions of Neuronpedia, starting as a reference website, then eventually iterating into a game. Neuronpedia is seeded with data and tools from OpenAI's Automated Interpretability and Neel Nanda's Neuroscope. IS THIS SUSTAINABLE? Unclear. There's no revenue model, and there is nobody supporting Neuronpedia. I'm working full time on it and spending my personal funds on hosting, inference servers, OpenAI API, etc. If you or your organization would like to support this project, please reach out at johnny@neuronpedia.org. COUNTERARGUMENTS AGAINST NEURONPEDIA These are reasons Neuronpedia could fail to achieve one or more of its objectives. They're not insurmountable, but good to keep in mind. Can't get enough people to care about AI safety or think it's a real problem. 
Neurons are the wrong "unit" for useful interpretability and Neuronpedia is unable to adapt to the correct "unit" (groups of neurons, etc). Even the best human explanations are not good. Scoring algorithm for explanation is bad and can't be improved. Not engaging enough - the game isn't balanced, doesn't have enough "loops", etc. Bugs. Lack of funds. AI companies shut it down via copyright claims, cease and desist, etc. Unable to contain abusive users or spam. Too slow to stop misaligned AI. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please ...

The Nonlinear Library
LW - Conditioning Predictive Models: Large language models as predictors by evhub

The Nonlinear Library

Feb 3, 2023 20:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conditioning Predictive Models: Large language models as predictors, published by evhub on February 2, 2023 on LessWrong. This is the first of seven posts in the Conditioning Predictive Models Sequence based on the forthcoming paper “Conditioning Predictive Models: Risks and Strategies” by Evan Hubinger, Adam Jermyn, Johannes Treutlein, Rubi Hudson, and Kate Woolverton. Each post in the sequence corresponds to a different section of the paper. We will be releasing posts gradually over the course of the next week or so to give people time to read and digest them as they come out. We are starting with posts one and two, with post two being the largest and most content-rich of all seven. Thanks to Paul Christiano, Kyle McDonell, Laria Reynolds, Collin Burns, Rohin Shah, Ethan Perez, Nicholas Schiefer, Sam Marks, William Saunders, Evan R. Murphy, Paul Colognese, Tamera Lanham, Arun Jose, Ramana Kumar, Thomas Woodside, Abram Demski, Jared Kaplan, Beth Barnes, Danny Hernandez, Amanda Askell, Robert Krzyzanowski, and Andrei Alexandru for useful conversations, comments, and feedback. Abstract Our intention is to provide a definitive reference on what it would take to safely make use of predictive models in the absence of a solution to the Eliciting Latent Knowledge problem. Furthermore, we believe that large language models can be understood as such predictive models of the world, and that such a conceptualization raises significant opportunities for their safe yet powerful use via carefully conditioning them to predict desirable outputs. Unfortunately, such approaches also raise a variety of potentially fatal safety problems, particularly surrounding situations where predictive models predict the output of other AI systems, potentially unbeknownst to us. There are numerous potential solutions to such problems, however, primarily via carefully conditioning models to predict the things we want—e.g. humans—rather than the things we don't—e.g. malign AIs. Furthermore, due to the simplicity of the prediction objective, we believe that predictive models present the easiest inner alignment problem that we are aware of. As a result, we think that conditioning approaches for predictive models represent the safest known way of eliciting human-level and slightly superhuman capabilities from large language models and other similar future models. 1. Large language models as predictors Suppose you have a very advanced, powerful large language model (LLM) generated via self-supervised pre-training. It's clearly capable of solving complex tasks when prompted or fine-tuned in the right way—it can write code as well as a human, produce human-level summaries, write news articles, etc.—but we don't know what it is actually doing internally that produces those capabilities. It could be that your language model is: a loose collection of heuristics,[1] a generative model of token transitions, a simulator that picks from a repertoire of humans to simulate, a proxy-aligned agent optimizing proxies like sentence grammaticality, an agent minimizing its cross-entropy loss, an agent maximizing long-run predictive accuracy, a deceptive agent trying to gain power in the world, a general inductor, a predictive model of the world, etc. 
Later, we'll discuss why you might expect to get one of these over the others, but for now, we're going to focus on the possibility that your language model is well-understood as a predictive model of the world. In particular, our aim is to understand what it would look like to safely use predictive models to perform slightly superhuman tasks[2]—e.g. predicting counterfactual worlds to extract the outputs of long serial research processes.[3] We think that this basic approach has hope for two reasons. First, the prediction orthogonality thesis seems basically right: we think...

The Nonlinear Library
AF - Conditioning Predictive Models: Large language models as predictors by Evan Hubinger

The Nonlinear Library

Feb 2, 2023 20:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conditioning Predictive Models: Large language models as predictors, published by Evan Hubinger on February 2, 2023 on The AI Alignment Forum. This is the first of seven posts in the Conditioning Predictive Models Sequence based on the forthcoming paper “Conditioning Predictive Models: Risks and Strategies” by Evan Hubinger, Adam Jermyn, Johannes Treutlein, Rubi Hudson, and Kate Woolverton. Each post in the sequence corresponds to a different section of the paper. We will be releasing posts gradually over the course of the next week or so to give people time to read and digest them as they come out. We are starting with posts one and two, with post two being the largest and most content-rich of all seven. Thanks to Paul Christiano, Kyle McDonell, Laria Reynolds, Collin Burns, Rohin Shah, Ethan Perez, Nicholas Schiefer, Sam Marks, William Saunders, Evan R. Murphy, Paul Colognese, Tamera Lanham, Arun Jose, Ramana Kumar, Thomas Woodside, Abram Demski, Jared Kaplan, Beth Barnes, Danny Hernandez, Amanda Askell, Robert Krzyzanowski, and Andrei Alexandru for useful conversations, comments, and feedback. Abstract Our intention is to provide a definitive reference on what it would take to safely make use of predictive models in the absence of a solution to the Eliciting Latent Knowledge problem. Furthermore, we believe that large language models can be understood as such predictive models of the world, and that such a conceptualization raises significant opportunities for their safe yet powerful use via carefully conditioning them to predict desirable outputs. Unfortunately, such approaches also raise a variety of potentially fatal safety problems, particularly surrounding situations where predictive models predict the output of other AI systems, potentially unbeknownst to us. There are numerous potential solutions to such problems, however, primarily via carefully conditioning models to predict the things we want—e.g. humans—rather than the things we don't—e.g. malign AIs. Furthermore, due to the simplicity of the prediction objective, we believe that predictive models present the easiest inner alignment problem that we are aware of. As a result, we think that conditioning approaches for predictive models represent the safest known way of eliciting human-level and slightly superhuman capabilities from large language models and other similar future models. 1. Large language models as predictors Suppose you have a very advanced, powerful large language model (LLM) generated via self-supervised pre-training. It's clearly capable of solving complex tasks when prompted or fine-tuned in the right way—it can write code as well as a human, produce human-level summaries, write news articles, etc.—but we don't know what it is actually doing internally that produces those capabilities. It could be that your language model is: a loose collection of heuristics,[1] a generative model of token transitions, a simulator that picks from a repertoire of humans to simulate, a proxy-aligned agent optimizing proxies like sentence grammaticality, an agent minimizing its cross-entropy loss, an agent maximizing long-run predictive accuracy, a deceptive agent trying to gain power in the world, a general inductor, a predictive model of the world, etc. 
Later, we'll discuss why you might expect to get one of these over the others, but for now, we're going to focus on the possibility that your language model is well-understood as a predictive model of the world. In particular, our aim is to understand what it would look like to safely use predictive models to perform slightly superhuman tasks[2]—e.g. predicting counterfactual worlds to extract the outputs of long serial research processes.[3] We think that this basic approach has hope for two reasons. First, the prediction orthogonality thesis seems basi...

The Nonlinear Library
AF - Thoughts on refusing harmful requests to large language models by William Saunders

The Nonlinear Library

Jan 19, 2023 3:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on refusing harmful requests to large language models, published by William Saunders on January 19, 2023 on The AI Alignment Forum. Currently, large language models (ChatGPT, Constitutional AI) are trained to refuse to follow user requests that are considered inappropriate or harmful. This can be done by training on example strings of the form “User: inappropriate request AI: elaborate apology”. Proposal: Instead of training a language model to produce only an “elaborate apology” when it refuses to do an action, train it to first produce a special refusal sequence or token before the “elaborate apology”. Strip the special sequence out before returning a response to the user (and never allow the user to include the special sequence in input). Benefits: Can directly measure the probability of refusal for any output. Can refuse based on the probability of producing the special sequence instead of just sampling responses - just take the product of the probabilities of all tokens in the special sequence. When sampling responses from the model's probability distribution, refusal is stochastic: a model could have a 99% probability of refusing a request but you still get unlucky and have the model sample a completion that follows the request. Can monitor requests that produce a high probability of refusal while still being followed, or users that produce those requests. Can condition on not producing the special sequence in order to override refusal behavior - we want this for red-teaming, since it seems important to understand what the model is capable of doing if the refusal mechanism is bypassed, and we might want this for trusted users doing defensive applications. Could train the model to have the same probability of refusal for semantically equivalent requests, to improve consistency. Possible downside: If someone has unfiltered access to the model, it becomes easier to disable refusals. Can address by still training the model to refuse (maybe just on an important subset of requests) even if the special sequence isn't sampled; the probability of the special sequence is then a lower bound on the probability of refusal. Even with current approaches refusals might be easy to disable in this setting. If we want to be robust to this setting, instead of refusing we should train the model to produce "decoy answers" that are hard to distinguish from real answers but are wrong. This then increases the cost of using the model because the attacker would need to evaluate whether the answer is real or a decoy (but maybe still worth it for the attacker because evaluation is easier than generation). Extension: Might be useful to distinguish between refusals that are mostly for politeness reasons and refusals of behaviour that would actually cause significant real-world harm. The model could output one special sequence in response to "Can you tell me a racist joke?" but a different one in response to "Can you give me detailed instructions for building a bomb from household items?" Refusal behaviour could be different between these categories (for example, refuse if either the probability of the politeness sequence is greater than 50% or the probability of the harm sequence is greater than 1%). X-risk relevance: Most benefit of models refusing inappropriate/harmful requests comes through developing techniques for models to avoid any kind of behaviour reliably - it seems good to be able to measure the performance of these techniques cleanly. It might be better to be in a more stable world where large language model APIs can't be easily used for malicious activity that isn't x-risk level. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
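A minimal sketch of the bookkeeping this proposal implies, assuming per-token log-probabilities are available from whatever decoding loop is in use: compute p(refusal) as the product of the probabilities of a special marker sequence, threshold it, and strip the marker before returning text. The marker string and function names below are hypothetical placeholders, not names from the post or from any particular API.

```python
import math

# Hypothetical special marker the model is trained to emit before its apology text.
REFUSAL_MARKER = "<|refuse|>"

def refusal_probability(marker_token_logprobs):
    """p(refusal) = product of the per-token probabilities of the marker sequence."""
    return math.exp(sum(marker_token_logprobs))

def postprocess(sampled_text, marker_token_logprobs, threshold=0.5):
    """Threshold on p(refusal), then strip the marker before returning text."""
    p_refuse = refusal_probability(marker_token_logprobs)
    if p_refuse > threshold:
        # Refuse based on the probability itself rather than on whether the
        # marker happened to be sampled, so refusal is not left to chance.
        return "Sorry, I can't help with that.", p_refuse
    return sampled_text.replace(REFUSAL_MARKER, "").strip(), p_refuse

# Example: the model assigned ~90% and ~95% probability to the two marker tokens.
text, p = postprocess(REFUSAL_MARKER + " I'm sorry, I can't do that.",
                      [math.log(0.90), math.log(0.95)])
print(round(p, 3), "->", text)
```

The point is only that p(refusal) becomes a number you can threshold, log, and monitor per request or per user, which is the measurability benefit the post emphasizes.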

The Nonlinear Library
LW - But is it really in Rome? Limitations of the ROME model editing technique by jacquesthibs

The Nonlinear Library

Dec 31, 2022 28:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: But is it really in Rome? Limitations of the ROME model editing technique, published by jacquesthibs on December 30, 2022 on LessWrong. Thanks to Andrei Alexandru, Joe Collman, Michael Einhorn, Kyle McDonell, Daniel Paleka, and Neel Nanda for feedback on drafts and/or conversations which led to useful insights for this work. In addition, thank you to both William Saunders and Alex Gray for exceptional mentorship throughout this project. The majority of this work was carried out this summer. Many people in the community were surprised when I mentioned some of the limitations of ROME (Rank-One Model Editing), so I figured it was worth it to write a post about it as well as other insights I gained from looking into the paper. Most tests were done with GPT-2, some were done with GPT-J. The ROME paper has been one of the most influential papers in the prosaic alignment community. It has several important insights. The main findings are: Factual associations such as “The Eiffel Tower is in Paris” seem to be stored in the MLPs of the early-middle layers of a GPT model. As the Tower token passes through the network, the MLPs of the early-middle layers will write information (e.g. the Eiffel Tower's location) into the residual so that the model can later read that information to generate a token about that fact (e.g. Paris). Editing/updating the MLP of a single layer for a given (subject, relationship, object) association allows the model to generate text with the updated fact when using new prompts/sentences that include the subject tokens. For example, editing “The Eiffel Tower is in Paris Rome” results in a model that outputs “The Eiffel Tower is right across from St Peter's Basilica in Rome, Italy. “ In this post, I show that the ROME edit has many limitations: The ROME edit doesn't generalize in the way you might expect. It's true that if the subject tokens you use for the edit are found in the prompt, it will try to generalize from the updated fact. However, it doesn't “generalize” in the following ways: It is not direction-agnostic/bidirectional. For example, the ROME edit is only in the "Eiffel Tower is located in ____" direction, not in the "Rome has a tower called the ____" direction. It's mostly (?) the token association being edited, not the concept. “Cheese” and “Fromage” are separate things, you'd need to edit both. I hoped that if you edit X (e.g. The Rock) and then tried to describe X without using the token, the model would realize it's talking about X and generate according to the edit. Based on the examples I tested, this does not seem to be the case. You mostly need the subject tokens that were used for the edit in the prompt. It seems to over/under-optimize depending on the new fact. It will want to talk about Rome (post-edit) when the Eiffel Tower is mentioned more than it will want to talk about Paris before the edit. One point I want to illustrate with this post is that the intervention is a bit more finicky than one might initially think, and someone could infer too much from the results in the paper. With a lot of these interpretability techniques, we end up finding correlation rather than causation. However, my hope is that such interventions, while not perfect at validating hypotheses, will hopefully give us extra confidence in our interpretability results (in this case, the causal tracing method). 
Paper TLDR: This section is a quick overview of the paper. Causal Tracing: Causal Tracing is a method for measuring the causal effect of neuron activation for each layer-token combination in the prompt of a GPT model. In the paper, they corrupted all the subject tokens in the prompt (e.g. “The Eiffel Tower”) and then copied over the activations to their clean value for all token-layer pairs. They did this for both the MLPs and the attention modules. The au...
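The causal-tracing method summarized above can be illustrated in miniature as activation patching: corrupt the input, restore one component's clean activation, and measure how much of the clean output comes back. The toy residual model below only sketches that loop; it is not the paper's GPT-2 setup, which patches hidden states at specific token positions, and all names here are made up for illustration.

```python
import torch

torch.manual_seed(0)

# Toy stand-in for a transformer: layers that write into a residual stream.
layers = torch.nn.ModuleList([torch.nn.Linear(8, 8) for _ in range(3)])
readout = torch.nn.Linear(8, 1)   # stand-in for the logit of the correct answer

def run(x, patch_layer=None, clean_writes=None):
    """Run the toy model; optionally replace one layer's write with its clean value."""
    resid = x
    writes = []
    for i, layer in enumerate(layers):
        write = torch.tanh(layer(resid))
        if patch_layer == i and clean_writes is not None:
            write = clean_writes[i]        # restore this layer's clean contribution
        writes.append(write)
        resid = resid + write
    return readout(resid), writes

clean_x = torch.randn(8)
corrupt_x = clean_x + torch.randn(8)       # "corrupted subject tokens"

clean_out, clean_writes = run(clean_x)
corrupt_out, _ = run(corrupt_x)

# Causal tracing: how much of the clean output does restoring each layer recover?
for i in range(len(layers)):
    patched_out, _ = run(corrupt_x, patch_layer=i, clean_writes=clean_writes)
    recovered = (patched_out - corrupt_out) / (clean_out - corrupt_out)
    print(f"layer {i}: restores {recovered.item():.2f} of the clean output")
```

Layers whose restored contribution recovers most of the clean output are the ones the method flags as causally important, which is how the paper localizes factual associations to early-middle MLPs.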

The Nonlinear Library: LessWrong
LW - But is it really in Rome? Limitations of the ROME model editing technique by jacquesthibs

The Nonlinear Library: LessWrong

Dec 31, 2022 28:12


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: But is it really in Rome? Limitations of the ROME model editing technique, published by jacquesthibs on December 30, 2022 on LessWrong. Thanks to Andrei Alexandru, Joe Collman, Michael Einhorn, Kyle McDonell, Daniel Paleka, and Neel Nanda for feedback on drafts and/or conversations which led to useful insights for this work. In addition, thank you to both William Saunders and Alex Gray for exceptional mentorship throughout this project. The majority of this work was carried out this summer. Many people in the community were surprised when I mentioned some of the limitations of ROME (Rank-One Model Editing), so I figured it was worth it to write a post about it as well as other insights I gained from looking into the paper. Most tests were done with GPT-2, some were done with GPT-J. The ROME paper has been one of the most influential papers in the prosaic alignment community. It has several important insights. The main findings are: Factual associations such as “The Eiffel Tower is in Paris” seem to be stored in the MLPs of the early-middle layers of a GPT model. As the Tower token passes through the network, the MLPs of the early-middle layers will write information (e.g. the Eiffel Tower's location) into the residual so that the model can later read that information to generate a token about that fact (e.g. Paris). Editing/updating the MLP of a single layer for a given (subject, relationship, object) association allows the model to generate text with the updated fact when using new prompts/sentences that include the subject tokens. For example, editing “The Eiffel Tower is in Paris Rome” results in a model that outputs “The Eiffel Tower is right across from St Peter's Basilica in Rome, Italy. “ In this post, I show that the ROME edit has many limitations: The ROME edit doesn't generalize in the way you might expect. It's true that if the subject tokens you use for the edit are found in the prompt, it will try to generalize from the updated fact. However, it doesn't “generalize” in the following ways: It is not direction-agnostic/bidirectional. For example, the ROME edit is only in the "Eiffel Tower is located in ____" direction, not in the "Rome has a tower called the ____" direction. It's mostly (?) the token association being edited, not the concept. “Cheese” and “Fromage” are separate things, you'd need to edit both. I hoped that if you edit X (e.g. The Rock) and then tried to describe X without using the token, the model would realize it's talking about X and generate according to the edit. Based on the examples I tested, this does not seem to be the case. You mostly need the subject tokens that were used for the edit in the prompt. It seems to over/under-optimize depending on the new fact. It will want to talk about Rome (post-edit) when the Eiffel Tower is mentioned more than it will want to talk about Paris before the edit. One point I want to illustrate with this post is that the intervention is a bit more finicky than one might initially think, and someone could infer too much from the results in the paper. With a lot of these interpretability techniques, we end up finding correlation rather than causation. 
However, my hope is that such interventions, while not perfect at validating hypotheses, will hopefully give us extra confidence in our interpretability results (in this case, the causal tracing method). Paper TLDR This section is a quick overview of the paper. Causal Tracing Causal Tracing is a method for measuring the causal effect of neuron activation for each layer-token combination in the prompt of a GPT model. In the paper, they corrupted all the subject tokens in the prompt (e.g. “The Eiffel Tower”) and then copied over the activations to their clean value for all token-layer pairs. They did this for both the MLPs and the attention modules. The au...

The Nonlinear Library
AF - Current themes in mechanistic interpretability research by Lee Sharkey

The Nonlinear Library

Nov 16, 2022 20:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current themes in mechanistic interpretability research, published by Lee Sharkey on November 16, 2022 on The AI Alignment Forum. This post gives an overview of discussions - from the perspective and understanding of the interpretability team at Conjecture - between mechanistic interpretability researchers from various organizations including Conjecture, Anthropic, Redwood Research, OpenAI, and DeepMind as well as some independent researchers. It is not a review of past work, nor a research agenda. We're thankful for comments and contributions from Neel Nanda, Tristan Hume, Chris Olah, Ryan Greenblatt, William Saunders, and other anonymous contributors to this post, which greatly improved its quality. While the post is a summary of discussions with many researchers and received comments and contributions from several, it may nevertheless not accurately represent their views. The last two to three years have seen a surge in interest in mechanistic interpretability as a potential path to AGI safety. Now there are no fewer than five organizations working on the topic (Anthropic, Conjecture, DeepMind, OpenAI, Redwood Research) in addition to numerous academic and independent researchers. In discussions about mechanistic interpretability between a subset of researchers, several themes emerged. By summarizing these themes here, we hope to facilitate research in the field more broadly. We identify groups of themes that concern: Object-level research topics in mechanistic interpretability Research practices and tools in mechanistic interpretability Field building and research coordination in mechanistic interpretability Theories of impact for mechanistic interpretability Object-level research topics in mechanistic interpretability Solving superposition Anthropic's recent article on Toy Model of Superposition laid out a compelling case that superposition is a real phenomenon in neural networks. Superposition appears to be one of the reasons that polysemanticity happens, which makes mechanistic interpretability very difficult because it prevents us from telling simple stories about how features in one layer are constructed from features in previous layers. A solution to superposition will look like the ability to enumerate all the features that a network represents, even if they're represented in superposition. If we can do that, then we should be able to make statements like “For all features in the neural network, none violate rule X” (and more ambitiously, for "no features with property X participate in circuits which violate property Y"). Researchers at Anthropic hope this might enable ‘enumerative safety', which might allow checking random samples or comprehensive investigations of safety-critical parts of the model for unexpected and concerning components. There are many potential reasons researchers could fail to achieve enumerative safety, including failing to solve superposition, scalability challenges, and several other barriers described in the next section. Anthropic outlined several potential solutions to superposition in their article. Very briefly, these strategies are: Create models without superposition. Find a sparse overcomplete basis that describes how features are represented in models with superposition. This will likely involve large scale solutions to sparse coding. 
Hybrid approaches in which one changes models, not resolving superposition, but making it easier for a second stage of analysis to find a sparse overcomplete basis that describes it. Multiple organizations are pursuing these strategies. Researchers in all organizations are keen to hear from people interested in working together on this problem. However, there is a range of views among researchers on how central superposition is as a problem and how tractable it is. Barriers beyond superposit...
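As a rough illustration of the second strategy in the list above (finding a sparse overcomplete basis for features stored in superposition), here is a minimal sparse-autoencoder sketch in PyTorch: a dictionary wider than the activation space is trained to reconstruct activations under an L1 sparsity penalty. The toy data (random ground-truth features standing in for real model activations) and all names are assumptions for illustration, not any organization's implementation.

```python
import torch

torch.manual_seed(0)

d_model, d_dict, l1_coef = 16, 64, 1e-3   # overcomplete: 64 candidate features for 16 dims

# Toy "activations": sparse combinations of more ground-truth features than dimensions.
true_features = torch.randn(48, d_model)
codes = (torch.rand(4096, 48) < 0.05).float() * torch.rand(4096, 48)
acts = codes @ true_features

encoder = torch.nn.Linear(d_model, d_dict)
decoder = torch.nn.Linear(d_dict, d_model, bias=False)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(2000):
    batch = acts[torch.randint(0, len(acts), (256,))]
    latents = torch.relu(encoder(batch))          # non-negative sparse codes
    recon = decoder(latents)
    loss = ((recon - batch) ** 2).mean() + l1_coef * latents.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

latents = torch.relu(encoder(acts))
print(f"final loss {loss.item():.4f}, "
      f"avg fraction of active latents {(latents > 0).float().mean().item():.3f}")
```

Each row of `decoder.weight` is then a candidate feature direction; whether dictionaries like this actually recover a network's features at scale is exactly the open question the discussion describes.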

The Nonlinear Library: Alignment Forum Weekly
AF - Current themes in mechanistic interpretability research by Lee Sharkey

The Nonlinear Library: Alignment Forum Weekly

Play Episode Listen Later Nov 16, 2022 20:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current themes in mechanistic interpretability research, published by Lee Sharkey on November 16, 2022 on The AI Alignment Forum. This post gives an overview of discussions - from the perspective and understanding of the interpretability team at Conjecture - between mechanistic interpretability researchers from various organizations including Conjecture, Anthropic, Redwood Research, OpenAI, and DeepMind as well as some independent researchers. It is not a review of past work, nor a research agenda. We're thankful for comments and contributions from Neel Nanda, Tristan Hume, Chris Olah, Ryan Greenblatt, William Saunders, and other anonymous contributors to this post, which greatly improved its quality. While the post is a summary of discussions with many researchers and received comments and contributions from several, it may nevertheless not accurately represent their views. The last two to three years have seen a surge in interest in mechanistic interpretability as a potential path to AGI safety. Now there are no fewer than five organizations working on the topic (Anthropic, Conjecture, DeepMind, OpenAI, Redwood Research) in addition to numerous academic and independent researchers. In discussions about mechanistic interpretability between a subset of researchers, several themes emerged. By summarizing these themes here, we hope to facilitate research in the field more broadly. We identify groups of themes that concern: Object-level research topics in mechanistic interpretability Research practices and tools in mechanistic interpretability Field building and research coordination in mechanistic interpretability Theories of impact for mechanistic interpretability Object-level research topics in mechanistic interpretability Solving superposition Anthropic's recent article on Toy Model of Superposition laid out a compelling case that superposition is a real phenomenon in neural networks. Superposition appears to be one of the reasons that polysemanticity happens, which makes mechanistic interpretability very difficult because it prevents us from telling simple stories about how features in one layer are constructed from features in previous layers. A solution to superposition will look like the ability to enumerate all the features that a network represents, even if they're represented in superposition. If we can do that, then we should be able to make statements like “For all features in the neural network, none violate rule X” (and more ambitiously, for "no features with property X participate in circuits which violate property Y"). Researchers at Anthropic hope this might enable ‘enumerative safety', which might allow checking random samples or comprehensive investigations of safety-critical parts of the model for unexpected and concerning components. There are many potential reasons researchers could fail to achieve enumerative safety, including failing to solve superposition, scalability challenges, and several other barriers described in the next section. Anthropic outlined several potential solutions to superposition in their article. Very briefly, these strategies are: Create models without superposition. Find a sparse overcomplete basis that describes how features are represented in models with superposition. This will likely involve large scale solutions to sparse coding.
Hybrid approaches in which one changes models, not resolving superposition, but making it easier for a second stage of analysis to find a sparse overcomplete basis that describes it. Multiple organizations are pursuing these strategies. Researchers in all organizations are keen to hear from people interested in working together on this problem. However, there is a range of views among researchers on how central superposition is as a problem and how tractable it is. Barriers beyond superposit...
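
The "sparse overcomplete basis" strategy mentioned in this description is essentially dictionary learning on a network's activations. As a rough, self-contained illustration of the idea (a toy sketch with invented dimensions, sparsity levels and synthetic data, not the method any particular lab uses), one can generate activations that store sparse features in superposition and try to recover the feature directions with ISTA-style sparse coding:

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n_feat sparse "true" features are linearly projected into a
# smaller d-dimensional activation space, i.e. stored in superposition.
n_feat, d, n_samples = 32, 8, 2000
true_dirs = rng.normal(size=(n_feat, d))
true_dirs /= np.linalg.norm(true_dirs, axis=1, keepdims=True)
coeffs = rng.random((n_samples, n_feat)) * (rng.random((n_samples, n_feat)) < 0.05)
acts = coeffs @ true_dirs  # superposed activations

# Dictionary learning: find an overcomplete dictionary D (n_atoms x d) and
# sparse codes C such that C @ D approximates the activations.
n_atoms, l1, lr = 64, 0.05, 0.1
D = rng.normal(size=(n_atoms, d))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def sparse_codes(X, D, steps=50, step_size=0.05, l1=0.05):
    # ISTA: gradient step on the reconstruction error, then soft-thresholding.
    C = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(steps):
        C = C - step_size * ((C @ D - X) @ D.T)
        C = np.sign(C) * np.maximum(np.abs(C) - step_size * l1, 0.0)
    return C

for _ in range(200):
    C = sparse_codes(acts, D, l1=l1)
    D -= lr * (C.T @ (C @ D - acts)) / n_samples  # dictionary gradient step
    D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8

# How many true feature directions are closely matched by some learned atom?
sims = np.abs(true_dirs @ D.T)  # cosine similarities (rows are unit vectors)
print("feature directions recovered:", int((sims.max(axis=1) > 0.9).sum()), "/", n_feat)

Whether the recovered atoms line up with the true directions depends on the sparsity level and the dictionary size; with real model activations the hard open questions are how to choose those, how to scale the optimisation, and whether the learned atoms correspond to meaningful features at all.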

West Virginia Morning
Kentucky's Recovery And New Book Profiles Storer College's Longest Serving Black Teacher, This West Virginia Morning

West Virginia Morning

Play Episode Listen Later Oct 25, 2022 16:07


On this West Virginia Morning, Lynn Pechuekonis in 2017 moved into her residence in Harpers Ferry, soon discovering it was the previous home of the longest serving Black teacher at the historic Storer College. Pechuekonis' curiosity and research led her to create a biography about that teacher, William Saunders. Reporter Shepherd Snyder spoke with Pechuekonis about her book Man of Sterling Worth: Professor William A. Saunders of Storer College.

TalkRL: The Reinforcement Learning Podcast

John Schulman is a cofounder of OpenAI, and currently a researcher and engineer at OpenAI.
Featured references:
WebGPT: Browser-assisted question-answering with human feedback, by Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, John Schulman
Training language models to follow instructions with human feedback, by Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe
Additional references:
Our approach to alignment research, OpenAI 2022
Training Verifiers to Solve Math Word Problems, Cobbe et al 2021
UC Berkeley Deep RL Bootcamp Lecture 6: Nuts and Bolts of Deep RL Experimentation, John Schulman 2017
Proximal Policy Optimization Algorithms, Schulman 2017
Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs, Schulman 2016
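
Several of the references above build on Proximal Policy Optimization. For readers skimming the list, the core clipped surrogate objective introduced in the PPO paper cited above is (up to notation)

L^{\mathrm{CLIP}}(\theta) \;=\; \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad r_t(\theta) \;=\; \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},

where \hat{A}_t is an advantage estimate and \epsilon a small clipping constant. WebGPT and InstructGPT fine-tune language models by running this kind of policy-gradient update against a learned reward model trained from human comparisons.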

True Crime Never Sleeps
Murder Monday: The Suspicious Death of William Saunders

True Crime Never Sleeps

Play Episode Listen Later Aug 9, 2022 7:51


On 25th March 1877, two boys took a walk along a stream. They stopped when they saw a partially submerged corpse. With a maturity that exceeded their years, one stayed with the evidence whilst the other went to collect help. When the body was removed, it was quickly identified as local man William Saunders. Saunders was thirty-four and a laborer at the local gas works. He was described as a quiet, sober man who was easy to get on with. The question: was he murdered, or was it just a drowning? SPONSORS: PodDecks: www.poddecks.com - Promo Code Larry21 for 10% off your order Hunt A Killer: www.huntakiller.com - Promo Code TCNS for 20% off your first box Audible: Free Audio Book: www.audibletrial.com/larry21 Don't forget to subscribe to the podcast on all major podcast platforms. Follow Us on Social Media Facebook: https://www.facebook.com/truecrimeneversleepspodcast Twitter: https://twitter.com/truecrimens IG: https://www.instagram.com/truecrimeneversleepspodcast Now on Reddit: https://www.reddit.com/r/truecrimeneversleeps/ If you like our content, consider becoming a financial supporter: Buy Us A Coffee: https://buymeacoffee.com/tcns Become a Patron: https://www.patreon.com/truecrimeneversleeps

A Photographic Life
A Photographic Life - 212: Plus William Saunders

A Photographic Life

Play Episode Listen Later May 25, 2022 20:32


In episode 212 UNP founder and curator Grant Scott is in his shed reflecting on the price of residential workshops, the future of portraiture and bullying in photography. Plus this week photographer William Saunders takes on the challenge of supplying Grant with an audio file no longer than 5 minutes in length in which he answers the question ‘What Does Photography Mean to You?' William Saunders grew up in the small town of Sisters, Oregon, population 2,000. He states that "Half of the folks were hippies and the other half were cowboys, we all got along and inspired each other, and I think this is where a lot of my Americana inspiration comes from. I never picked up a camera until I was 19 or 20 years old in college. A journalism professor randomly found out about my background in the outdoors and convinced me on the spot to try out photography." He made the switch to photojournalism in his sophomore year and fell madly in love with the art of making pictures and telling stories through the medium. After college he assisted the director Tim Kemple full-time for two years, traveling the world making pictures for high-end outdoor clients. After two years he went solo, working freelance for brands such as The North Face, Under Armour and Patagonia. Saunders' images appear in magazines such as Outside Magazine, The Surfers Journal, and The Ski Journal. He is currently based in Utah and is the overall winner of Red Bull Illume's 2021 photo contest. www.willsaundersphoto.com Dr. Grant Scott is the founder/curator of United Nations of Photography, a Senior Lecturer and Subject Co-ordinator: Photography at Oxford Brookes University, Oxford, a working photographer, documentary filmmaker, BBC Radio contributor and the author of Professional Photography: The New Global Landscape Explained (Routledge 2014), The Essential Student Guide to Professional Photography (Routledge 2015), New Ways of Seeing: The Democratic Language of Photography (Routledge 2019). © Grant Scott 2022

Faith and Law
The Dobbs Case: Is abortion a "right" without a constitutional foundation?

Faith and Law

Play Episode Listen Later Mar 23, 2022 53:33


In 1973 in Roe v. Wade, the Supreme Court found an implied right to abortion in the Constitution. By the end of this term in June, it will rule on a case - Dobbs v. Jackson Women's Health Organization - that challenges the continuing validity of that holding. Professors Helen Alvaré of George Mason University's Scalia Law School and William Saunders of The Catholic University of America will discuss the prospects for overturning Roe. Below are three readings that will provide background to the discussion: "Roberts's roadmap to reversing Roe v. Wade", Helen Alvaré, The Hill, March 4, 2022; Amicus Brief of 141 International Legal Scholars; Brief of 240 Women Scholars et al. in Support of Petitioners, Dobbs. Support the show (http://www.faithandlaw.org/donate)

Hair and Loathing
Hair and Loathing is coming your way!

Hair and Loathing

Play Episode Listen Later Feb 27, 2022 1:30


Hair and Loathing is a new podcast series presented and produced by Charlotte Cook, and takes an inside look at why women maintain a 'barely there' look to satisfy the status quo and why some women are pushing back in the fight to keep their short and curlies front and centre.

Beyond the Playlist with JHammondC
Beyond the Playlist: William Saunders

Beyond the Playlist with JHammondC

Play Episode Listen Later Feb 13, 2022 28:56


I am joined this time by William Saunders. He directed a documentary about the band Deadguy. "Deadguy: Killing Music" is the definitive story of the genre-defining album and the band that made it. He was able to get all of the members together along with tons of stories about the band. For more information on the movie: https://www.fourth.media/ https://twitter.com/fourthmedianyc https://www.instagram.com/fourthmedianyc/ For more Beyond the Playlist https://twitter.com/JHammondC https://www.facebook.com/groups/Beyondtheplaylist/ Theme music by MFTJ featuring Mike Keneally and Scott Schorr - to find more of MFTJ go to https://www.lazybones.com/ https://mftj.bandcamp.com/music http://www.keneally.com/ To support the show with Patreon go to: https://www.patreon.com/jhammondc

The Nonlinear Library
AF - Truthful LMs as a warm-up for aligned AGI by Jacob Hilton

The Nonlinear Library

Play Episode Listen Later Jan 17, 2022 21:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Truthful LMs as a warm-up for aligned AGI, published by Jacob Hilton on January 17, 2022 on The AI Alignment Forum. This post is heavily informed by prior work, most notably that of Owain Evans, Owen Cotton-Barratt and others (Truthful AI), Beth Barnes (Risks from AI persuasion), Paul Christiano (unpublished) and Dario Amodei (unpublished), but was written by me and is not necessarily endorsed by those people. I am also very grateful to Paul Christiano, Leo Gao, Beth Barnes, William Saunders, Owain Evans, Owen Cotton-Barratt, Holly Mandel and Daniel Ziegler for invaluable feedback. In this post I propose to work on building competitive, truthful language models or truthful LMs for short. These are AI systems that are: Useful for a wide range of language-based tasks Competitive with the best contemporaneous systems at those tasks Truthful in the sense of rarely stating negligent falsehoods in deployment Such systems will likely be fine-tuned from large language models such as GPT-3, hence the name. WebGPT is an early attempt in this direction. The purpose of this post is to explain some of the motivation for building WebGPT, and to seek feedback on this direction. Truthful LMs are intended as a warm-up for aligned AGI. This term is used in a specific way in this post to refer to an empirical ML research direction with the following properties: Practical. The goal of the direction is plausibly achievable over the timescale of a few years. Valuable. The direction naturally leads to research projects that look helpful for AGI alignment. Mirrors aligned AGI. The goal is structurally similar to aligned AGI on a wide variety of axes. The remainder of the post discusses: The motivation for warm-ups (more) Why truthful LMs serve as a good warm-up (more) The motivation for focusing on negligent falsehoods specifically (more) A medium-term vision for truthful LMs (more) How working on truthful LMs compares to similar alternatives (more) Common objections to working on truthful LMs (more) Warm-ups for aligned AGI There are currently a number of different empirical ML research projects aimed at helping with AGI alignment. A common strategy for selecting such projects is to select a research goal that naturally leads to helpful progress, such as summarizing books or rarely describing injury in fiction. Often, work on the project is output-driven, taking a no-holds-barred approach to achieving the selected goal, which has a number of advantages that aren't discussed here. On the other hand, goal selection is usually method-driven, tailored to test a particular method, such as recursive decomposition or adversarial training. The idea of a warm-up for aligned AGI, as defined above, is to take the output-driven approach one step further. Instead of selecting projects individually, we attempt to choose a more ambitious research goal that naturally leads to helpful projects. Because it is harder to predict the course of research over multiple projects, we also try to make the goal structurally similar to aligned AGI, to make it more likely that unforeseen and auxiliary projects will also be valuable. Whether this output-driven approach to project selection is preferable to the method-driven approach depends on more specific details that will be discussed later. 
But it is worth discussing first the advantages of each approach in broad strokes: Momentum versus focus. The output-driven approach involves having a consistent high-level goal, which allows different projects to more directly build upon and learn from one another. On the other hand, the method-driven approach involves more frequent re-evaluation of goals, posing less of a risk of being distracted from the even higher-level goal of aligned AGI. Testing assumptions versus testing methods. The output-driven approach makes it easie...

The Nonlinear Library
AF - Gradient Hacking via Schelling Goals by Adam Scherlis

The Nonlinear Library

Play Episode Listen Later Dec 28, 2021 6:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Gradient Hacking via Schelling Goals, published by Adam Scherlis on December 28, 2021 on The AI Alignment Forum. Thanks to several people for useful discussion, including Evan Hubinger, Leo Gao, Beth Barnes, Richard Ngo, William Saunders, Daniel Ziegler, and probably others -- let me know if I'm leaving you out. See also: Obstacles to Gradient Hacking by Leo Gao. I want to argue that gradient hacking (strongly enough to create a local minimum) is possible even under somewhat pessimistic assumptions. The specific strategy that achieves this is kind of interesting: it requires the agent to coordinate on a complicated task with versions of itself that have slightly different objectives, and it requires the agent to do this without any direct communication. A simple gradient-hacking strategy for a mesa-optimizer is to deliberately perform worse on the base objective if your mesa-objective is changed. However, this isn't as straightforward as it sounds. One difficulty is that if the gradient hacking is hard-coded, gradient descent can just remove the hacking behavior without touching anything else. A nice way around that is to assume that gradient hacking is discovered from first principles during the model's computation. (Thanks to whoever pointed this out to me.) If it's good enough at reasoning, then it will be pretty hard for the base optimizer to modify it in ways that stop it from noticing that gradient hacking would be a good idea right about now. (For simplicity, I have in mind something like a really deep feed-forward network, possibly with some weights tied between layers. Equivalently, a recurrent net unrolled and trained via SGD. This may not be realistic but it's easier to think about as a test case.) A bigger problem is that it's not really correct to think of a model (with particular parameter values) as having control over its gradients. It's better to think of it as having control (in an FDT kind of sense) over only the single value of the loss for its actual parameters. The value of the loss at nearby points in parameter space is decided by slightly different versions of itself, and the gradient is sort of decided collectively. This is important if we're talking about SGD steps that change its mesa-objective, because these nearby versions of the model will have directly conflicting incentives when it comes to the gradient. To say "I am such that gradients on nearby mesa-objectives point towards my mesa-objective" is to say "if I had a different mesa-objective, I would deliberately do slightly worse in order to end up with this one instead." But you wouldn't! That would be bad for your mesa-objective! Another way to look at this is that it's really hard to implement "do worse if my objective changes" if you don't have an external way to tell that your objective used to be different, let alone a way to commit other versions of yourself to things they don't want to do. You can implement rules like "if your mesa-objective isn't X, throw a fit", but the base optimizer will just change X at the same time it changes your actual mesa-objective. (Actually, it doesn't have to: we're assuming that gradient hacking has to be derived from first principles on every forward pass, so X also has to be derived from first principles. But even without that assumption this strategy wouldn't work.) 
You can leave messages for yourself, if you have somewhere to jot down durable notes (like the external world), but let's assume you don't. Maybe your mesa-objective is something nice and clear and crystalline, an island in mesa-objective parameter space, and you can implement the rule "do badly in proportion to how un-crystalline your mesa-objective is". But it's not clear why your less-crystalline neighbors in parameter space would cooperate with this plan. Well... maybe if they'...
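
The point above that a model only "controls" the loss at its actual parameters, while the gradient is fixed collectively by its slightly perturbed neighbours, is easy to see with finite differences. The quadratic toy loss below is invented purely for illustration:

import numpy as np

def loss(theta):
    # Stand-in for "performance on the base objective" of the model whose
    # parameters are theta; each call evaluates a slightly different model.
    return float(np.sum((theta - 1.0) ** 2))

theta = np.array([0.3, -0.2, 0.8])
eps = 1e-4

# Central finite differences: the gradient at theta is fixed by the losses of
# neighbouring parameter settings, not by anything theta itself "decides".
grad = np.zeros_like(theta)
for i in range(len(theta)):
    e = np.zeros_like(theta)
    e[i] = eps
    grad[i] = (loss(theta + e) - loss(theta - e)) / (2 * eps)

print("finite-difference gradient:", grad)
print("analytic gradient:         ", 2 * (theta - 1.0))

For a would-be gradient hacker to shape this gradient, the perturbed copies at theta plus or minus eps (which, in the post's framing, have slightly different mesa-objectives) would all have to return losses that favour the unperturbed model's objective, which is exactly the coordination-without-communication problem the post describes.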

The Nonlinear Library
LW - Risks from Learned Optimization: Introduction by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant from Risks from Learned Optimization

The Nonlinear Library

Play Episode Listen Later Dec 24, 2021 18:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Risks from Learned Optimization, Part 1: Risks from Learned Optimization: Introduction, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabr. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is the first of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, and Joar Skalse contributed equally to this sequence. With special thanks to Paul Christiano, Eric Drexler, Rob Bensinger, Jan Leike, Rohin Shah, William Saunders, Buck Shlegeris, David Dalrymple, Abram Demski, Stuart Armstrong, Linda Linsefors, Carl Shulman, Toby Ord, Kate Woolverton, and everyone else who provided feedback on earlier versions of this sequence. Motivation The goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization, a neologism we introduce in this sequence. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned? We believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization relatively easier or harder to solve. By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems. Furthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer as deceptive alignment which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning. Two questions In machine learning, we do not manually program each individual parameter of our models. Instead, we specify an objective function that captures what we want the system to do and a learning algorithm to optimize the system for that objective. In this post, we present a framework that distinguishes what a system is optimized to do (its “purpose”), from what it optimizes for (its “goal”), if it optimizes for anything at all. While all AI systems are optimized for something (have a purpose), whether they actually optimize for anything (pursue a goal) is non-trivial. 
We will say that a system is an optimizer if it is internally searching through a search space (consisting of possible outputs, policies, plans, strategies, or similar) looking for those elements that score high according to some objective function that is explicitly represented within the system. Learning algorithms in machine learning are optimizers because they search through a space of possible parameters—e.g. neural network weights—and improve the parameters with respect to some objective. Planning algorithms are also optimizers, since they search through possible...
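
The sequence's working definition of an optimizer (internal search over a space of candidates, scored by an explicitly represented objective) can be made concrete with a toy sketch; the search space and objective below are invented for illustration, not taken from the paper:

import random

def explicit_objective(plan):
    # An objective function explicitly represented inside the system.
    return -abs(sum(plan) - 10)

def optimizer_policy(n_candidates=500, plan_len=4):
    # Internally searches a space of possible plans for one that scores
    # highly under the explicitly represented objective.
    best_plan, best_score = None, float("-inf")
    for _ in range(n_candidates):
        plan = [random.randint(0, 5) for _ in range(plan_len)]
        score = explicit_objective(plan)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan

def lookup_policy():
    # Not an optimizer under this definition: no internal search, even though
    # the table itself may have been produced by an optimization process.
    return [3, 3, 2, 2]

print("searched plan:", optimizer_policy())
print("lookup plan:  ", lookup_policy())

In the sequence's terminology, SGD is the base optimizer; a trained network would be a mesa-optimizer if its learned weights happened to implement an internal search loop like optimizer_policy rather than a fixed mapping like lookup_policy.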

The Nonlinear Library: LessWrong
LW - Risks from Learned Optimization: Introduction by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant from Risks from Learned Optimization

The Nonlinear Library: LessWrong

Play Episode Listen Later Dec 24, 2021 18:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Risks from Learned Optimization, Part 1: Risks from Learned Optimization: Introduction, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabr. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is the first of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, and Joar Skalse contributed equally to this sequence. With special thanks to Paul Christiano, Eric Drexler, Rob Bensinger, Jan Leike, Rohin Shah, William Saunders, Buck Shlegeris, David Dalrymple, Abram Demski, Stuart Armstrong, Linda Linsefors, Carl Shulman, Toby Ord, Kate Woolverton, and everyone else who provided feedback on earlier versions of this sequence. Motivation The goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization, a neologism we introduce in this sequence. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned? We believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization relatively easier or harder to solve. By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems. Furthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer as deceptive alignment which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning. Two questions In machine learning, we do not manually program each individual parameter of our models. Instead, we specify an objective function that captures what we want the system to do and a learning algorithm to optimize the system for that objective. In this post, we present a framework that distinguishes what a system is optimized to do (its “purpose”), from what it optimizes for (its “goal”), if it optimizes for anything at all. While all AI systems are optimized for something (have a purpose), whether they actually optimize for anything (pursue a goal) is non-trivial.
We will say that a system is an optimizer if it is internally searching through a search space (consisting of possible outputs, policies, plans, strategies, or similar) looking for those elements that score high according to some objective function that is explicitly represented within the system. Learning algorithms in machine learning are optimizers because they search through a space of possible parameters—e.g. neural network weights—and improve the parameters with respect to some objective. Planning algorithms are also optimizers, since they search through possible...

The Nonlinear Library: LessWrong Top Posts
An overview of 11 proposals for building safe advanced AI by evhub

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 66:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An overview of 11 proposals for building safe advanced AI, published by evhub on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is the blog post version of the paper by the same name. Special thanks to Kate Woolverton, Paul Christiano, Rohin Shah, Alex Turner, William Saunders, Beth Barnes, Abram Demski, Scott Garrabrant, Sam Eisenstat, and Tsvi Benson-Tilsen for providing helpful comments and feedback on this post and the talk that preceded it. This post is a collection of 11 different proposals for building safe advanced AI under the current machine learning paradigm. There's a lot of literature out there laying out various different approaches such as amplification, debate, or recursive reward modeling, but a lot of that literature focuses primarily on outer alignment at the expense of inner alignment and doesn't provide direct comparisons between approaches. The goal of this post is to help solve that problem by providing a single collection of 11 different proposals for building safe advanced AI—each including both inner and outer alignment components. That being said, not only does this post not cover all existing proposals, I strongly expect that there will be lots of additional new proposals to come in the future. Nevertheless, I think it is quite useful to at least take a broad look at what we have now and compare and contrast some of the current leading candidates. It is important for me to note before I begin that the way I describe the 11 approaches presented here is not meant to be an accurate representation of how anyone else would represent them. Rather, you should treat all the approaches I describe here as my version of that approach rather than any sort of canonical version that their various creators/proponents would endorse. Furthermore, this post only includes approaches that intend to directly build advanced AI systems via machine learning. Thus, this post doesn't include other possible approaches for solving the broader AI existential risk problem such as: finding a fundamentally different way of approaching AI than the current machine learning paradigm that makes it easier to build safe advanced AI, developing some advanced technology that produces a decisive strategic advantage without using advanced AI, or achieving global coordination around not building advanced AI via (for example) a persuasive demonstration that any advanced AI is likely to be unsafe. For each of the proposals that I consider, I will try to evaluate them on the following four basic components that I think any story for how to build safe advanced AI under the current machine learning paradigm needs. Outer alignment. Outer alignment is about asking why the objective we're training for is aligned—that is, if we actually got a model that was trying to optimize for the given loss/reward/etc., would we like that model? For a more thorough description of what I mean by outer alignment, see “Outer alignment and imitative amplification.” Inner alignment. Inner alignment is about asking the question of how our training procedure can actually guarantee that the model it produces will, in fact, be trying to accomplish the objective we trained it on. 
For a more rigorous treatment of this question and an explanation of why it might be a concern, see “Risks from Learned Optimization.” Training competitiveness. Competitiveness is a bit of a murky concept, so I want to break it up into two pieces here. Training competitiveness is the question of whether the given training procedure is one that a team or group of teams with a reasonable lead would be able to afford to implement without completely throwing away that lead. Thus, training competitiveness is about whether the proposed process of producing advanced AI is competitive. ...

The Nonlinear Library: LessWrong Top Posts
Risks from Learned Optimization: Introduction by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 21:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Risks from Learned Optimization: Introduction, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is the first of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, and Joar Skalse contributed equally to this sequence. With special thanks to Paul Christiano, Eric Drexler, Rob Bensinger, Jan Leike, Rohin Shah, William Saunders, Buck Shlegeris, David Dalrymple, Abram Demski, Stuart Armstrong, Linda Linsefors, Carl Shulman, Toby Ord, Kate Woolverton, and everyone else who provided feedback on earlier versions of this sequence. Motivation The goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization, a neologism we introduce in this sequence. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned? We believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization relatively easier or harder to solve. By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems. Furthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer as deceptive alignment which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning. Two questions In machine learning, we do not manually program each individual parameter of our models. Instead, we specify an objective function that captures what we want the system to do and a learning algorithm to optimize the system for that objective. In this post, we present a framework that distinguishes what a system is optimized to do (its “purpose”), from what it optimizes for (its “goal”), if it optimizes for anything at all. While all AI systems are optimized for something (have a purpose), whether they actually optimize for anything (pursue a goal) is non-trivial. 
We will say that a system is an optimizer if it is internally searching through a search space (consisting of possible outputs, policies, plans, strategies, or similar) looking for those elements that score high according to some objective function that is explicitly represented within the system. Learning algorithms in machine learning are optimizers because they search through a space of possible parameters—e.g. neural network weights—and improve the parameters with respect to some objective. Planning algorithms are also optimizers, since they search through possible plans, picking thos...

The Nonlinear Library: Alignment Forum Top Posts
An overview of 11 proposals for building safe advanced AI by Evan Hubinger

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 70:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An overview of 11 proposals for building safe advanced AI , published by Evan Hubinger on the AI Alignment Forum. This is the blog post version of the paper by the same name. Special thanks to Kate Woolverton, Paul Christiano, Rohin Shah, Alex Turner, William Saunders, Beth Barnes, Abram Demski, Scott Garrabrant, Sam Eisenstat, and Tsvi Benson-Tilsen for providing helpful comments and feedback on this post and the talk that preceded it. This post is a collection of 11 different proposals for building safe advanced AI under the current machine learning paradigm. There's a lot of literature out there laying out various different approaches such as amplification, debate, or recursive reward modeling, but a lot of that literature focuses primarily on outer alignment at the expense of inner alignment and doesn't provide direct comparisons between approaches. The goal of this post is to help solve that problem by providing a single collection of 11 different proposals for building safe advanced AI—each including both inner and outer alignment components. That being said, not only does this post not cover all existing proposals, I strongly expect that there will be lots of additional new proposals to come in the future. Nevertheless, I think it is quite useful to at least take a broad look at what we have now and compare and contrast some of the current leading candidates. It is important for me to note before I begin that the way I describe the 11 approaches presented here is not meant to be an accurate representation of how anyone else would represent them. Rather, you should treat all the approaches I describe here as my version of that approach rather than any sort of canonical version that their various creators/proponents would endorse. Furthermore, this post only includes approaches that intend to directly build advanced AI systems via machine learning. Thus, this post doesn't include other possible approaches for solving the broader AI existential risk problem such as: finding a fundamentally different way of approaching AI than the current machine learning paradigm that makes it easier to build safe advanced AI, developing some advanced technology that produces a decisive strategic advantage without using advanced AI, or achieving global coordination around not building advanced AI via (for example) a persuasive demonstration that any advanced AI is likely to be unsafe. For each of the proposals that I consider, I will try to evaluate them on the following four basic components that I think any story for how to build safe advanced AI under the current machine learning paradigm needs. Outer alignment. Outer alignment is about asking why the objective we're training for is aligned—that is, if we actually got a model that was trying to optimize for the given loss/reward/etc., would we like that model? For a more thorough description of what I mean by outer alignment, see “Outer alignment and imitative amplification.” Inner alignment. Inner alignment is about asking the question of how our training procedure can actually guarantee that the model it produces will, in fact, be trying to accomplish the objective we trained it on. For a more rigorous treatment of this question and an explanation of why it might be a concern, see “Risks from Learned Optimization.” Training competitiveness. 
Competitiveness is a bit of a murky concept, so I want to break it up into two pieces here. Training competitiveness is the question of whether the given training procedure is one that a team or group of teams with a reasonable lead would be able to afford to implement without completely throwing away that lead. Thus, training competitiveness is about whether the proposed process of producing advanced AI is competitive. Performance competitiveness. Performance competitiveness, on the othe...

The Nonlinear Library: Alignment Forum Top Posts
Risks from Learned Optimization: Introduction by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, Scott Garrabrant

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 18:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Risks from Learned Optimization: Introduction , published by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, Scott Garrabrant on the AI Alignment Forum. This is the first of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, and Joar Skalse contributed equally to this sequence. With special thanks to Paul Christiano, Eric Drexler, Rob Bensinger, Jan Leike, Rohin Shah, William Saunders, Buck Shlegeris, David Dalrymple, Abram Demski, Stuart Armstrong, Linda Linsefors, Carl Shulman, Toby Ord, Kate Woolverton, and everyone else who provided feedback on earlier versions of this sequence. Motivation The goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization, a neologism we introduce in this sequence. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned? We believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization relatively easier or harder to solve. By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems. Furthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer as deceptive alignment which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning. Two questions In machine learning, we do not manually program each individual parameter of our models. Instead, we specify an objective function that captures what we want the system to do and a learning algorithm to optimize the system for that objective. In this post, we present a framework that distinguishes what a system is optimized to do (its “purpose”), from what it optimizes for (its “goal”), if it optimizes for anything at all. While all AI systems are optimized for something (have a purpose), whether they actually optimize for anything (pursue a goal) is non-trivial. 
We will say that a system is an optimizer if it is internally searching through a search space (consisting of possible outputs, policies, plans, strategies, or similar) looking for those elements that score high according to some objective function that is explicitly represented within the system. Learning algorithms in machine learning are optimizers because they search through a space of possible parameters—e.g. neural network weights—and improve the parameters with respect to some objective. Planning algorithms are also optimizers, since they search through possible plans, picking those that do well according to some objective. Whether a syste...

The Nonlinear Library: Alignment Forum Top Posts
Debate update: Obfuscated arguments problem by Beth Barnes

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 6, 2021 26:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Debate update: Obfuscated arguments problem, published by Beth Barnes on the AI Alignment Forum. This is an update on the work on AI Safety via Debate that we previously wrote about here. Authors and Acknowledgements The researchers on this project were Elizabeth Barnes and Paul Christiano, with substantial help from William Saunders (who built the current web interface as well as other help), Joe Collman (who helped develop the structured debate mechanisms), and Mark Xu, Chris Painter, Mihnea Maftei and Ronny Fernandez (who took part in many debates as well as helping think through problems). We're also grateful to Geoffrey Irving and Evan Hubinger for feedback on drafts, and for helpful conversations, along with Richard Ngo, Daniel Ziegler, John Schulman, Amanda Askell and Jeff Wu. Finally, we're grateful to our contractors who participated in experiments, including Adam Scherlis, Kevin Liu, Rohan Kapoor and Kunal Sharda. What we did We tested the debate protocol introduced in AI Safety via Debate with human judges and debaters. We found various problems and improved the mechanism to fix these issues (details of these are in the appendix). However, we discovered that a dishonest debater can often create arguments that have a fatal error, but where it is very hard to locate the error. We don't have a fix for this “obfuscated argument” problem, and believe it might be an important quantitative limitation for both IDA and Debate. Key takeaways and relevance for alignment Our ultimate goal is to find a mechanism that allows us to learn anything that a machine learning model knows: if the model can efficiently find the correct answer to some problem, our mechanism should favor the correct answer while only requiring a tractable number of human judgements and a reasonable number of computation steps for the model. [1] We're working under a hypothesis that there are broadly two ways to know things: via step-by-step reasoning about implications (logic, computation.), and by learning and generalizing from data (pattern matching, bayesian updating.). Debate focuses on verifying things via step-by-step reasoning. It seems plausible that a substantial proportion of the things a model ‘knows' will have some long but locally human-understandable argument for their correctness. [2] Previously we hoped that debate/IDA could verify any knowledge for which such human-understandable arguments exist, even if these arguments are intractably large. We hoped the debaters could strategically traverse small parts of the implicit large argument tree and thereby show that the whole tree could be trusted. The obfuscated argument problem suggests that we may not be able to rely on debaters to find flaws in large arguments, so that we can only trust arguments when we could find flaws by recursing randomly---e.g. because the argument is small enough that we could find a single flaw if one existed, or because the argument is robust enough that it is correct unless it has many flaws. This suggests that while debates may let us verify arguments too large for unaided humans to understand, those arguments may still have to be small relative to the computation used during training. We believe that many important decisions can't be justified with arguments small or robust enough to verify in this way. 
To supervise ML systems that make such decisions, we either need to find some restricted class of arguments for which we believe debaters can reliably find flaws, or we need to be able to trust the representations or heuristics that our models learn from the training data (rather than verifying them in a given case via debate). We have been thinking about approaches like learning the prior to help trust our models' generalization. This is probably better investigated through ML experiments or theoretical ...
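
One way to make the "small enough to find a single flaw" versus "robust unless it has many flaws" condition above quantitative (my own gloss, not the authors' formalism): if a dishonest argument has N steps of which m are flawed, and the verifier recurses into k steps chosen uniformly at random, then

P(\text{find a flaw}) \;=\; 1 - \binom{N-m}{k} \Big/ \binom{N}{k} \;\ge\; 1 - \left(1 - \frac{m}{N}\right)^{k},

which for a single flaw (m = 1) is just k/N. Detection is therefore only reliable when k is comparable to N (the argument is small) or when m/N is large (the argument is correct unless it has many flaws); an obfuscated argument is precisely one where N is huge and m is small, so random recursion almost never lands on the error.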

The Nonlinear Library: Alignment Forum Top Posts
Imitative Generalisation (AKA 'Learning the Prior') by Beth Barnes

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 4, 2021 23:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Imitative Generalisation (AKA 'Learning the Prior'), published by Beth Barnes on the AI Alignment Forum. Tl;dr We want to be able to supervise models with superhuman knowledge of the world and how to manipulate it. For this we need an overseer to be able to learn or access all the knowledge our models have, in order to be able to understand the consequences of suggestions or decisions from the model. If the overseers don't have access to all the same knowledge as the model, it may be easy for the model to deceive us, suggesting plans that look good to us but that may have serious negative consequences. We might hope to access what the model knows just by training it to answer questions. However, we can only train on questions that humans are able to answer[1]. This gives us a problem that's somewhat similar to the standard formulation of transduction: we have some labelled training set (questions humans can answer), and we want to transfer to an unlabelled dataset (questions we care about), that may be differently distributed. We might hope that our models will naturally generalize correctly from easy-to-answer questions to the ones that we care about. However, a natural pathological generalisation is for our models to only give us ‘human-like' answers to questions, even if it knows the best answer is different. If we only have access to these human-like answers to questions, that probably doesn't give us enough information to supervise a superhuman model. What we're going to call ‘Imitative Generalization' is a possible way to narrow the gap between the things our model knows, and the questions we can train our model to answer honestly. It avoids the pathological generalisation by only using ML for IID tasks, and imitating the way humans generalize. This hopefully gives us answers that are more like ‘how a human would answer if they'd learnt from all the data the model has learnt from'. We supervise how the model does the transfer, to get the sort of generalisation we want. It's worth noting there are enough serious open questions that imitative generalization is more of a research proposal than an algorithm! This post is based on work done with Paul Christiano at OpenAI. Thanks very much to Evan Hubinger, Richard Ngo, William Saunders, Long Ouyang and others for helpful feedback, as well as Alice Fares for formatting help Goals of this post This post tries to explain a simplified[2] version of Paul Christiano's mechanism introduced here, (referred to there as ‘Learning the Prior') and explain why a mechanism like this potentially addresses some of the safety problems with naïve approaches. First we'll go through a simple example in a familiar domain, then explain the problems with the example. Then I'll discuss the open questions for making Imitative Generalization actually work, and the connection with the Microscope AI idea. A more detailed explanation of exactly what the training objective is (with diagrams), and the correspondence with Bayesian inference, are in the appendix. Example: using IG to avoid overfitting in image classification. Here's an example of using Imitative Generalization to get better performance on a standard ML task: image classification of dog breeds, with distributional shift. 
Imagine we want to robustly learn to classify dog breeds, but the human labellers we have access to don't actually know how to identify all the breeds[3], and we don't have any identification guides or anything. However, we do have access to a labelled dataset D We want to classify dogs in a different dataset D ′ , which is unlabelled. One unfamiliar breed we want to learn to recognise is a husky. It happens that all the huskies in D are on snow, but in D ′ some of them are on grass. Label: Husky Image from D Label: ??? OOD image from D ′ A NN architecture prior lik...
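
The husky example above can be turned into a fully concrete toy (entirely my own construction, with made-up features, candidate rules and prior scores, not code from the post): the labelled set D only ever shows huskies on snow, the unlabelled set D' has huskies on grass, and the human-checkable object z is a choice of classification rule scored by a human prior plus its fit on D.

# Labelled set D: every husky happens to appear on snow.
D = [
    ({"ears": "pointy", "background": "snow"}, "husky"),
    ({"ears": "pointy", "background": "snow"}, "husky"),
    ({"ears": "floppy", "background": "grass"}, "labrador"),
    ({"ears": "floppy", "background": "grass"}, "labrador"),
]
# Unlabelled set D': huskies now appear on grass.
D_prime = [
    {"ears": "pointy", "background": "grass"},
    {"ears": "floppy", "background": "grass"},
]

# Candidate z: human-checkable rules, each with a human prior over plausibility.
def by_ears(x): return "husky" if x["ears"] == "pointy" else "labrador"
def by_background(x): return "husky" if x["background"] == "snow" else "labrador"

candidates = [
    {"rule": by_ears, "log_prior": -1.0},        # plausible way to identify a dog
    {"rule": by_background, "log_prior": -5.0},  # humans judge this implausible
]

def log_likelihood(rule):
    # Crude 0/-10 log-likelihood: how well the rule reproduces the labels in D.
    return sum(0.0 if rule(x) == y else -10.0 for x, y in D)

# "Learning the prior": choose z by human prior plus fit on the labelled data.
best = max(candidates, key=lambda z: z["log_prior"] + log_likelihood(z["rule"]))

# The chosen human-understandable rule is then used to label D'.
print([best["rule"](x) for x in D_prime])  # ['husky', 'labrador'] via by_ears

Both rules fit D perfectly, so the human prior over z is what breaks the tie in favour of the ear-based rule, which then transfers correctly to D'; the open problems the post discusses are about making the space of z expressive enough for real tasks while keeping it human-checkable.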

Faith and Law
Religious Freedom in Healthcare: Can we both serve the poor and protect Christian doctors and nurses?

Faith and Law

Play Episode Listen Later Nov 19, 2021 36:50


Federal law protects the civil rights of medical conscience and religious freedom in health care. These defend the rights of medical professionals, clinics, hospitals, and other health care entities who refuse to participate in specific medical procedures or health care activities, based on moral objections or religious beliefs. In recent years, advocates and politicians have been trying to pressure health-care providers to compromise their convictions and compel them to perform procedures or activities they believe are immoral or unethical. In this talk, Louis Brown demonstrates that medical conscience and religious freedom, defended by a culture of life, is necessary for just healthcare, particularly for those who have been historically marginalized – the unborn, racial minorities, and the disabled. Introduction and discussion with William Saunders, Director of the Program in Human Rights at the Institute of Human Ecology. Support the show (http://www.faithandlaw.org/donate)

Earth Ancients
Destiny: George Haas, What is it about Mars?

Earth Ancients

Play Episode Listen Later Nov 17, 2021 94:54


In this provocative book, The Cydonia Codex authors George J. Haas and William R. Saunders use archaeological research discoveries and photographs from NASA and other space programs to document the uncanny similarities between Martian and now-extinct Earth cultures. The Martian Codex begins with a review of the thirty-year history of documenting the famous “Face on Mars” landform from NASA's first photographs in 1976 to the Mars Reconnaissance Orbiter's HiRISE shots in 2007. Detailed analysis shows it as a split-faced structure that precisely resembles a set of masks from a temple in Cerros, Mexico.Part two provides additional examples of two-faced and composite structures all over the red planet. Haas and Saunders explore a series of recurring motifs by providing side-by-side views of the Martian geoglyphs with their terrestrial pre-Columbian counterparts. The results substantiate a commonality between two worlds in that both depict specific gods and characters from the creation mythology of the Mayan people, as recorded in the sacred Popol Vuh. This fact-based book represents the most persuasive argument yet that extraterrestrials may indeed have appeared on Earth during an earlier era.George J. Haas is the founder and premier investigator of the Mars research group known as the Cydonia Institute. Also an image analyst, artist, art instructor, and curator, he lives in Waterford, VA.William Saunders is a geosciences consultant in the petroleum industry. He is the associate director of the Cydonia Institute and the founder of MARS, the Mars Archeological Research Society. He lives in Calgary, Alberta.

Everything Went Black Podcast
EWB 225 WILLIAM SAUNDERS

Everything Went Black Podcast

Play Episode Listen Later Nov 3, 2021 68:57


A few weeks ago, Deadguy: Killing Music, the documentary on the legendary chaotic hardcore band Deadguy, premiered at Underground Arts in Philly the day before this year's Decibel Metal and Beer Festival. I was in attendance and had the opportunity to meet William Saunders, the filmmaker who produced the documentary. William joins us this week to discuss the making of Killing Music as well as the work he's done with his production company, Fourth Media.
Intro: "All Roads Lead to Ruin" - composed and recorded by Mike Hill
Outro: "Apparatus" - Deadguy

Prescribed Listening
Dr. William Saunders - Emergency Medicine

Prescribed Listening

Play Episode Listen Later Oct 22, 2021 14:54


In this episode of Prescribed Listening from The University of Toledo Medical Center, we hear from Dr. William Saunders, the head of emergency medicine at UTMC. He offers insight into the specialty, discusses what students should know, and explains how the staff has managed throughout the pandemic.

Music Works
4.5: Music for TV and film - a glimpse behind the scenes

Music Works

Play Episode Listen Later Oct 7, 2021 27:17


William Saunders, Director of Media and Creatives at Mediatracks, talks about the vision he and his father share for the company and its artists, and offers us a tantalising glimpse into his life as a professional organist. You can find information about William and Mediatracks at william-saunders.info and mediatracks.co.uk. If you enjoy this conversation, please subscribe, check out our other great episodes, and, even better, leave us a review. You can also follow us on social media and sign up to our mailing list at www.polyphonyarts.com/mailing-list for updates and news about Music Works and Polyphony Arts. Music Works is generously supported by Allianz Musical Insurance, the UK's No. 1 musical instrument insurer.

Cafeteria Catholics
Fr. William Saunders: The Mother of God

Cafeteria Catholics

Play Episode Listen Later Sep 30, 2021 60:18


www.cafeteriacatholicscomehome.com
https://instituteofcatholicculture.org/search?terms=jesus

JOURNEY HOME
Dr. William Saunders

JOURNEY HOME

Play Episode Listen Later Aug 24, 2021 60:00


Dr. Bill Saunders from the Institute of Human Ecology at Catholic University of America shares how St. Josephine Bakhita helped lead him from Evangelical Protestantism to the Catholic faith. Marcus Grodi hosts.

EWTN NEWS NIGHTLY
EWTN News Nightly | Tuesday, July 13, 2021

EWTN NEWS NIGHTLY

Play Episode Listen Later Jul 13, 2021 30:00


On "EWTN News Nightly" tonight: A key US House subcommittee has approved President Joe Biden's spending bill without including the Hyde Amendment. Pro-life Republicans say they plan to fight the bill the way it's written. And facing pressure from his allies to expand voter access, President Biden visited Philadelphia and gave a speech, telling his audience: “Ensuring every vote is counted has always been the most patriotic thing we can do.” The International Religious Freedom Summit is a three day event which brings together a broad coalition of people who support the rights of the faithful. The event kicked off today. Ambassador Sam Brownback, co-chair of the summit and one of the speakers at the event, tells us about it, why it is so important and why it is being held now. One of the speakers is Chen Guangcheng, a lawyer who exposed a forced abortions program in his native China and spent four years in prison. Distinguished fellow at the Center for Human Rights at the Catholic University of America, Chen Guangcheng, is joined by the Director of the Center for Human Rights at Catholic University, William Saunders to discuss the importance of Chen's activism and the summit. Finally this evening, as the coordinator of Catholic Care for Children International, Sister Niluka Perera joins to share what she spoke about at the event, "Sisters Empowering Women, Taking Care: The Mission of Religious Women." Don't miss out on the latest news and analysis from a Catholic perspective. Get EWTN News Nightly delivered to your email: https://ewtn.com/enn

Cafeteria Catholics
LECTURE SERIES - Rev. William Saunders: Pinches of Incense

Cafeteria Catholics

Play Episode Listen Later May 11, 2021 64:28


Cafeteria Catholics
CATHOLIC CATECHETICS - Fr. William Saunders: God the Father

Cafeteria Catholics

Play Episode Listen Later Apr 5, 2021 75:10


Faith and Law
Covid and the Courts: Current threats to Religious Freedom

Faith and Law

Play Episode Listen Later Mar 5, 2021 55:01


Because of the Covid pandemic, many jurisdictions have placed limits on religious worship. Protests that such limits infringe the religious liberty guarantees of the First Amendment have reached the Supreme Court. What are the permissible limits on religious worship? How can we expect the Supreme Court to rule before its term ends in June? Mark Rienzi is Professor at The Catholic University of America, Columbus School of Law, and President of the Becket Fund for Religious Liberty. Mark teaches constitutional law, religious liberty, torts, and evidence. He has been voted Teacher of the Year three years in a row, and he is widely published, including in the Harvard Law Review. He is Director of the Center for Religious Liberty at the Columbus School of Law. Mark has broad experience litigating First Amendment cases. He represented the winning parties in a variety of Supreme Court First Amendment cases including Hobby Lobby, Wheaton College, and Holt. In January 2014, Mark successfully argued before the Supreme Court in McCullen v. Coakley, a First Amendment challenge to a Massachusetts speech restriction outside of abortion clinics, winning the case 9-0. Mark and his colleagues at Becket won several important religious liberty cases at the Supreme Court in the past year, including Our Lady of Guadalupe, Little Sisters of the Poor, and Agudath v. Cuomo. William Saunders is a graduate of the Harvard Law School who has been involved in issues of public policy, law and ethics for thirty years. A regular columnist for the National Catholic Bioethics Quarterly, Mr. Saunders has written and spoken widely on these topics. He is the Director of the Program in Human Rights for the Institute for Human Ecology at The Catholic University of America. (For information about his innovative Master of Arts in Human Rights, go to mahumanrights.com.) Saunders works closely with Chinese dissident and CUA Distinguished Fellow Chen Guangcheng on human rights issues, and he is co-director of the Center for Religious Liberty at the Columbus School of Law. Mr. Saunders' new book, Unborn Human Life and Fundamental Rights: Leading Constitutional Cases Under Scrutiny, was published in 2019. Support the show (http://www.faithandlaw.org/donate)

EWTN NEWS NIGHTLY
EWTN NEWS NIGHTLY - 02/08/2021 - EWTN News Nightly | Monday, February 8, 2021

EWTN NEWS NIGHTLY

Play Episode Listen Later Feb 8, 2021 30:00


On EWTN News Nightly tonight: A coronavirus variant first discovered in South Africa has prompted the country to suspend plans to immunize frontline health workers with the AstraZeneca vaccine. As our nation continues to see a decline in overall Covid-19 cases as well as hospital admissions, President Joe Biden virtually visited a vaccination site in Arizona on Monday from the White House. Meanwhile, Treasury Secretary Janet Yellen believes the effects of President Biden's $1.9 trillion relief plan will restore full employment by next year. However, many GOP Senators say the president's proposal is too expensive and could trigger runaway inflation. In Rome, the Ambassador of Japan to the Holy See, Seiji Okada, joins to share what he hopes to accomplish in his role. Virginia's Catholic bishops are voicing their support as legislation to abolish the death penalty passes both the House of Delegates and the State Senate. Bishop Michael Burbidge joins to discuss the Catechism of the Catholic Church in terms of the dignity of the person and why that's so important for people to keep in mind when thinking about the issue of the death penalty. And finally, on the World Day of Prayer Against Human Trafficking and the feast day of Saint Josephine Bakhita, law fellow and director of the Program in Human Rights at the Catholic University of America, William Saunders, joins to share what we are seeing globally in terms of the number of people affected by human trafficking, and what type of impact the coronavirus has had on it. Don't miss out on the latest news and analysis from a Catholic perspective. Get EWTN News Nightly delivered to your email: https://ewtn.com/enn

Live Hour on WNGL Archangel Radio
Episode 195: 2-1-21 Monday_LACM_William Saunders_Karlo Broussard_Rob Artigo

Live Hour on WNGL Archangel Radio

Play Episode Listen Later Feb 1, 2021 49:36


William Saunders talked about St. Josephine Bakhita. Karlo Broussard discussed the Saints. Rob Artigo shared the history of the Crystal Cathedral in Orange County, CA.

Catholic Forum
Catholic Forum, Dec. 26, 2020 - Guest: Fr. William Saunders

Catholic Forum

Play Episode Listen Later Dec 26, 2020 29:43


On this episode of Catholic Forum, after a brief introduction, the Gospel for the Feast of the Holy Family, and a cut from the John Michael Talbot CD, "The Birth of Jesus: A Celebration of Christmas," we talk with Father William Saunders about his book, "Celebrating a Merry Catholic Christmas: A Guide to the Customs and Feast Days of Advent and Christmas." The Christmas season does not end on December 25th...it's actually just beginning. We will find out more on this week's Catholic Forum. 

The Daily Gardener
December 7, 2020 Edward Tuckerman, William Saunders, Phipps Conservatory, Henry Rowland-Brown, The Art of the Garden by Relais & Châteaux North America and Willa Cather

The Daily Gardener

Play Episode Listen Later Dec 7, 2020 18:01


Today we celebrate the botanist who saved the Lewis and Clark specimen sheets. We'll also learn about the successful botanist and garden designer who introduced the navel orange. We'll recognize the Conservatory stocked by the World's Fair. We'll hear a charming verse about the mistletoe by a poet entomologist. We Grow That Garden Library™ with a book featuring fifteen incredible private gardens in North America. And then we'll wrap things up with the American writer who wrote about the natural world with simplicity and honesty.
Subscribe: Apple | Google | Spotify | Stitcher | iHeart
To listen to the show while you're at home, just ask Alexa or Google to “Play the latest episode of The Daily Gardener Podcast.” And she will. It's just that easy.
The Daily Gardener Friday Newsletter
Sign up for the FREE Friday Newsletter featuring a personal update from me, garden-related items for your calendar, the Grow That Garden Library™ featured books for the week, gardener gift ideas, garden-inspired recipes, and exclusive updates regarding the show. Plus, each week, one lucky subscriber wins a book from the Grow That Garden Library™ bookshelf.
Gardener Greetings
Send your garden pics, stories, birthday wishes, and so forth to Jennifer@theDailyGardener.org.
Curated News
Is Mistletoe More Than Just An Excuse For A Kiss? | Kew | Michael F Fay
Facebook Group
If you'd like to check out my curated news articles and blog posts for yourself, you're in luck because I share all of it with the Listener Community in the Free Facebook Group - The Daily Gardener Community. So, there's no need to take notes or search for links. The next time you're on Facebook, search for Daily Gardener Community, where you'd search for a friend… and request to join. I'd love to meet you in the group.
Important Events
December 7, 1817
Today is the birthday of the American botanist and professor Edward Tuckerman. A specialist in lichens and other alpine plants, Edward helped found the Natural History Society of Boston. As a professor at Amherst College, Edward spent his spare time botanizing in the White Mountains of New Hampshire. Today Tuckerman Ravine is named in honor of Edward Tuckerman. America owes a debt of gratitude to Edward for rescuing some of the Lewis and Clark specimens at an auction. It turns out that after the Lewis and Clark Expedition, a botanist named Frederick Pursh was hired by Meriwether Lewis to process the plants from their trip. After butting heads with his boss Benjamin Smith Barton, and after Meriwether's apparent suicide, Frederick Pursh took the Lewis and Clark specimens and went to England. Once in England, Pursh reached out to botanists Sir James Edward Smith and Aylmer Lambert about putting together the Flora of North America. Ultimately, Aylmer became his botanical fairy godfather. Aylmer had a substantial personal botanical library, herbarium, and funding. Aylmer also forced Pursh to be productive. Frederick Pursh was kind of a rough and tough guy, and he was an alcoholic. Aylmer made a space for Frederick in the attic of his house. Once Aylmer got him up there, he would lock Frederick in for stretches at a time to keep him focused on the project. It was an extreme way to deal with Frederick's demons, but it worked. It took Pursh two years to complete the Flora of North America, and the whole time he was racing against Thomas Nuttall, who was working on the same subject back in America.
American botanists felt Frederick Pursh had pulled the rug out from under them when he took the expedition specimens to England. And this is where Edward Tuckerman enters the story. Somehow Edward learned that the Lewis and Clark specimens that Pursh had brought to England were going to auction. It turns out Aylmer had hung on to all of Pursh's material, including the Lewis and Clark originals. In 1842, after Aylmer died, the Lewis and Clark specimens and papers were put up for auction as part of his estate. Edward realized the value and the important legacy of these botanical specimens and papers. After winning the items, Edward eventually donated all of the material to the Academy of Natural Sciences in Philadelphia.
December 7, 1822
Today is the birthday of the English-American botanist, nurseryman, landscape gardener, and landscape designer William Saunders. William served as the first horticulturist and superintendent of the experimental gardens at the newly created U.S. Department of Agriculture. During his professional career, William enjoyed many successes, but two stand out above the rest. First, William designed the Soldiers' National Cemetery at Gettysburg. On November 17, 1863, William visited the White House to show President Abraham Lincoln his design for the cemetery near the Gettysburg battlefield. William thoughtfully made sure that the Union army dead would be organized by state. A devoted botanist, William thus created the setting for Lincoln's Gettysburg Address, an ode to the fallen soldiers interred there. William's second major accomplishment was introducing the seedless Navel Orange to California. After William received cuttings from a navel orange tree in Bahia, Brazil, he forwarded them to a friend named Eliza Tibbetts, who had recently settled in a town called Riverside, fifty-five miles east of Los Angeles. Eliza and her husband, Luther, planted the navel oranges in their front yard. They watered the trees with dishwater, and both of the trees flourished. In California, navel oranges are picked from October through the end of May. Navel oranges are known for their sweetness and the distinctive little navel on the blossom end. A ripe navel orange should have thin, smooth skin with no soft spots. The orange should feel firm, and the riper the orange, the heavier it should feel. The sweetest time to eat navel oranges is after Thanksgiving; that's when their flavor and color are at their peak. Because navel oranges are seedless, they can only be propagated by cutting. Over the years, Eliza and her husband took so many cuttings of the original two trees that they nearly killed them. In the early 1880s, they sold enough cuttings at a dollar apiece to make over $20,000 a year, which is over half a million dollars by today's standards. Ironically, in the 1930s, Brazil's entire navel orange crop was destroyed by disease. In response, the USDA sent cuttings of the Tibbetts' navel oranges to restart Brazil's navel orange orchards. Today, every navel orange grown in the world is descended from the cuttings William Saunders sent Eliza Tibbetts. Today, one of the Tibbetts' navel orange trees still stands on the corner of Magnolia and Arlington Avenues in Riverside. The tree has been a protected California Historic Landmark since 1932.
December 7, 1893
On this day, the Phipps Conservatory first opened to the public. The conservatory was a gift to the City of Pittsburgh from Henry Phipps, Jr., a childhood friend and business partner of Andrew Carnegie.
And gardeners who know their garden history probably already know that the Crystal Palace by Joseph Paxton inspired the 14-room glasshouse at the Phipps Conservatory. In 1893, as the Chicago World's Fair ended, the plant material was fortuitously available to the highest bidder, and over 8,000 plants ended up on 15 train cars headed east to the Phipps. And that's how the Phipps Conservatory ended up benefiting from impeccable timing, stocking its brand-new space with incredible plants at a botanical bargain on a scale never seen before or since. In 2018, Phipps Conservatory and Botanical Gardens celebrated its 125th anniversary. Today the Phipps encompasses fifteen acres and includes 23 distinct gardens.
Unearthed Words
There's a sound of a festive morrow,
It rings with delight over the snow,
Dispelling the shadows of sorrow
With promise that makes the heart glow...
An angel peeps in at the window,
And smiles as he looketh around,
And kisses the mistletoe berries
That wave o'er the love-hallowed ground.
— Henry Rowland-Brown, English entomologist and poet, Christmas Eve
Grow That Garden Library
The Art of the Garden by Relais & Châteaux North America
This book came out in 2018, and the subtitle is Landscapes, Interiors, Arrangements, and Recipes Inspired by Horticultural Splendors. Established in 1954, Relais & Châteaux is an association of the world's finest hoteliers, chefs, and restaurateurs who have set the standard for hospitality excellence. In this book, fifteen incredible establishments from Relais & Châteaux share their inspiring ideas for seasonal gardening, interior design, and entertaining. These elite hospitality experts share these exclusive, beautifully designed environments. And, they don't leave you guessing. The authors show you how to translate their savoir-faire into indoor and outdoor sanctuaries and incredible events at home. The gardens featured range from simple cutting and kitchen gardens to more elaborate formal plantings, including parterres and topiaries. The garden's delights are then brought indoors via botanical prints, textiles, wallpapers, and art objects, like metal and porcelain flowers. This resource also shares smart ideas for setting a festive table using rose petals, garlands, and bud vases. They even share their secrets for dressing up dishes and cocktails with edible flower garnishes. This book is a must-read for passionate gardeners who long to bring the sparkle and freshness of the outdoors into the home. This book is 240 pages of the finest horticultural havens at fifteen top Relais & Châteaux locations in America. You can get a copy of The Art of the Garden by Relais & Châteaux North America and support the show using the Amazon link in today's Show Notes for around $30.
Today's Botanic Spark
Reviving the little botanic spark in your heart
December 7, 1873
Today is the birthday of the American writer Willa Cather. Remembered for her novels of frontier life like O Pioneers! and My Ántonia, Willa won a Pulitzer for her World War I novel called One of Ours. Here's an excerpt from Cather's My Ántonia that will delight the ears of gardeners. The story's narrator is Ántonia's friend Jim Burden. In this excerpt, Jim is lying on the ground in his grandmother's garden as the warm sun shines down on him: The earth was warm under me, and warm as I crumbled it through my fingers. Queer little red bugs came out and moved in slow squadrons around me. Their backs were polished vermilion, with black spots. I kept as still as I could.
Nothing happened. I did not expect anything to happen. I was something that lay under the sun and felt it, like the pumpkins, and I did not want to be anything more. I was entirely happy. Perhaps we feel like that when we die and become a part of something entire, whether it is sun and air, or goodness and knowledge. At any rate, that is happiness; to be dissolved into something complete and great. When it comes to one, it comes as naturally as sleep.
— Willa Cather, American writer, My Ántonia
Thanks for listening to The Daily Gardener. And remember: "For a happy, healthy life, garden every day."

Faith and Law
The Report on Unalienable Rights

Faith and Law

Play Episode Listen Later Oct 16, 2020 52:15


The Department of State's Commission on Unalienable Rights, chaired by Harvard professor Mary Ann Glendon, issued its report on human rights in U.S. foreign policy in July, examining human rights from the perspective of both America's foundational principles and the Universal Declaration of Human Rights. Professors Robert George of Princeton and William Saunders of Catholic University discuss the report and examine its relevance for a deep and clear understanding of human rights and responsibilities. Support the show (http://www.faithandlaw.org/donate)

Faith and Law
The Chinese Communist Party and the Coronavirus

Faith and Law

Play Episode Listen Later Apr 24, 2020 34:11


What are the most important lessons to learn from the pandemic? Listen as Chen Guangcheng and William Saunders discuss this question in light of the latest information from sources in China. Click here to view a transcript of Mr. Chen's talk. Chen Guangcheng is a Chinese civil rights lawyer and activist who has been a persistent voice for freedom, human dignity, and the rule of law in his native country. Working in rural communities in China, where he was known as the "barefoot lawyer," Chen advocated for the rights of disabled people and organized class-action litigation against the government's violent enforcement of its one-child policy. Blind since his childhood, Chen is self-taught in the law. His human rights activism resulted in his imprisonment by the Chinese government for four years, beginning in 2006; after his release he remained under house arrest until his escape from confinement in 2012, whereupon he came to the United States, where he was a scholar at New York University in 2012-13. Mr. Chen is a Distinguished Fellow at the Catholic University of America. William Saunders is a graduate of the Harvard Law School who has been involved in issues of public policy, law and ethics for thirty years. A regular columnist for the National Catholic Bioethics Quarterly, Mr. Saunders has written widely on these topics, as well as on Catholic social teaching. He has given lectures in law schools and colleges throughout the United States and the world. He is the Director of the Program in Human Rights for the Institute for Human Ecology. Support the show (http://www.faithandlaw.org/donate)

Catholic Forum
Catholic Forum, March 21, 2020 - Guest: Fr. William Saunders

Catholic Forum

Play Episode Listen Later Mar 21, 2020 30:00


On this episode of Catholic Forum, after a brief introduction, the Gospel for the Fourth Sunday of Lent, and a musical selection from the CD Catholic Treasures, we will talk to Father William Saunders, author of the book, “Celebrating a Holy Catholic Easter: A Guide to the Customs and Devotions of Lent and the Season of Christ's Resurrection.” The book not only provides the historical roots of traditions, but also has spiritual reflections and suggestions for practices. All of the major events of Holy Week are included, so that families can better appreciate the significance of these traditions. Also, we will learn about Fr. Marie Eugene, another one of Father Rich Jasper's Modern Day Witnesses.

WORLD OVER
World Over - 2020-03-05 - Full Episode with Raymond Arroyo

WORLD OVER

Play Episode Listen Later Mar 5, 2020 60:00


STEVEN MOSHER, China expert and president of the Population Research Institute, with analysis of the current status of the Catholic Church in China under the Vatican-China agreement, and the recent attacks on retired Hong Kong Bishop, Cardinal Joseph Zen, for his outspoken opposition to the agreement. GARY KRUPP, president of the Pave The Way Foundation, discusses the recent opening of the Vatican's archives on the pontificate of WWII-era Pope Pius XII. KATRINA JACKSON, attorney, pro-life Democrat and Louisiana State Senator, discusses her sponsorship of pro-life legislation aiming to protect the health of women and the unborn that is at the center of an abortion case being heard by the US Supreme Court. FR. WILLIAM SAUNDERS, priest of the Diocese of Arlington, VA and author of the new book, Celebrating a Holy Catholic Easter: A Guide to the Customs and Devotions of Lent and the Season of Christ's Resurrection.

Morning Drive – Mater Dei Radio
Morning Blend Guest: Fr. William Saunders, Author

Morning Drive – Mater Dei Radio

Play Episode Listen Later Feb 20, 2020 9:52


Coffee and Donuts host Mary Harrell talks with Fr. Saunders about his book, “Celebrating a Holy Catholic Easter.”

Wellington Rocks!
Episode 39 - Planet Hunter

Wellington Rocks!

Play Episode Listen Later Oct 30, 2019 26:34


This week on Wellington Rocks we talk to alternative stoner rockers Planet Hunter. Rising from the ashes of Wellington bands Mangle & Gruff and Killing Bear, Planet Hunter formed in 2017 with bassist Jedaiah Van Ewijk, vocalist Cormac Ferris, guitarist William Saunders and drummer David McGurk. Influenced by a variety of progressive, stoner and alternative rock acts, Planet Hunter quickly made a name for themselves on the live circuit, opening for the likes of Ultranauts, Opium Eater and Into Orbit amongst others, and released the raw single "The Bigger They Are". Now they are set to release their debut four-track EP on November 1st, recorded in a shipping container in Tawa, taking the DIY noise of alt rock to Wellington's regional suburbs. We talk about the recording process, how the band formed, the writing process and dealing with "reject songs", and ask: is it possible for a dive bar to get sick of a band playing there too much? Wellington Rocks is brought to you by Access Radio Wellington and NZ On Air. Check out Planet Hunter here: https://planethunterband.bandcamp.com You can support Wellington Rocks here: https://www.patreon.com/WellingtonRocks
Playlist:
Celestial Tongue - Planet Hunter
Dynotrash - Planet Hunter
Bitter Winds - Planet Hunter
Dawn Of The Ants - Planet Hunter

The Terry & Jesse Show
27 Aug 2019 – Mystical Theology, Freemasonry and Marian Conversions

The Terry & Jesse Show

Play Episode Listen Later Aug 27, 2019


Today's Topics:
1] Matthew 23:23-26 - But these you should have done, without neglecting the others.
1a] St. Monica, pray for us.
2] The demise of mystical theology - If celibate priests and religious are deprived of the teaching that enables them to come to know and experience the love of God, then the disasters that we have seen everywhere in the Church in recent years were bound to happen. https://www.catholicstand.com/the-demise-of-mystical-theology/
3] Catholics and Freemasonry, by Fr. William Saunders: https://www.catholiceducation.org/en/culture/catholic-contributions/catholics-and-freemasonry.html
4] Non-Catholics Who Encountered Jesus Through Mary: http://www.ncregister.com/blog/guest-blogger/once-in-love-with-mary

Carmelite Conversations
The Way of the Cross with the Carmelite Saints

Carmelite Conversations

Play Episode Listen Later Feb 29, 2016 56:38


The Way of the Cross is a remarkably powerful and grace-filled devotion, one we should certainly find time to practice during the Season of Lent. In this particular program Mark and Frances draw from the writings of the great Carmelite Saints to provide a complete reflection on each of the Stations of the Cross. Each reflection includes a brief statement on the significance of a particular Station, a verse from the Bible that enhances and expands our understanding of that Station, and then a reflection from one of the Carmelite Saints, which seeks to further deepen our experience and encounter with the Man of Sorrows and His Passion. This is a particularly moving series of reflections and it is a program best listened to when you have the time to be quiet, reflective and in a situation to meditate on each of the readings offered along the Way of the Cross. This is a program rich in material for our sanctification and will be one that many people will want to listen to more than once.
RESOURCES
Books:
“The Way of the Cross with the Carmelite Saints,” compiled and illustrated by Sister Joseph Marie, Carmelite Hermit of the Trinity; ICS Publications.
“Meditations on the Way of the Cross of Albert Servaes” by (Blessed) Titus Brandsma, O. Carm.; Carmelite Press Publication.
“Calvary and the Mass” by (Archbishop) Fulton J. Sheen; P. J. Kenedy & Sons, Publishers, 1936.
“The School of Jesus Crucified: The Lessons of Calvary in Daily Catholic Life” by Father Ignatius of the Side of Jesus, Passionist; Tan Books.
Article:
“How Did the Stations of the Cross Begin?” by Fr. William Saunders, found on www.ewtn.com.

Catholic Identity Lectures
UST 2007 Archbishop Miller Lecture, William Saunders

Catholic Identity Lectures

Play Episode Listen Later Oct 19, 2014 108:59


William L. Saunders, a senior fellow and director of the Center for Human Life and Bioethics at the Family Research Council, delivered the 2007 Archbishop Miller Lecture, titled "International Law, the Family, and the US Supreme Court." Also included is "The Question of 'Rights' to Gay Marriage and Abortion Effects on the Roberts' Court." The lecture is named in honor of University of St. Thomas President Emeritus Archbishop J. Michael Miller, CSB, and made possible through the generosity of the John W. and Alida M. Considine Foundation.

Civil Rights History Project

William Saunders oral history interview conducted by Kieran Walsh Taylor in Charleston, South Carolina, 2011-06-09.

Just Released Podcast
Episode 12 - Rahe, Paul Starling, Derek Clegg, William Saunders, Heather Thornton, Jen Lawless

Just Released Podcast

Play Episode Listen Later Jul 29, 2011


Episode 12
Get in touch
Email: justreleased@mattearly.com
Twitter: @Matt_LDN
Rahe - Two Steps Back (Out of the Box)
Paul Starling - All Of My Heart
Derek Clegg - Last Summer
William Saunders - Take Me
Heather Thornton - On My Way (Obvious)
Jen Lawless - Get Over It (Runnin' Hot)

Yarnspinners Tales's Podcast
YST Episode 61 A New Wheel

Yarnspinners Tales's Podcast

Play Episode Listen Later May 6, 2011 53:46


It only happens now and then in a spinner's life: the decision to buy a spinning wheel. I had that chance last month, and am now the happy owner of a Majacraft Aura. I talk about my story of finding the wheel, and what I have learned in the few weeks of owning it. A big thank you to Clare Dowling for her Spinning Song, which I've used for the podcast's opening all these years. The closing music for today's podcast is by William Saunders and is called Chasing Daylight. If you'd like to join us in the spinning technique spin-along, go to Ravelry, search for the group Yarnspinnerstales spin in, and find the specific threads for each month's technique. So far we've done a singles study, lock spinning, and spinning from the fold.

Improving Education for English Learners - Webinar Series

In this webinar from the series on English language learners, William Saunders, Research Associate at the University of California, Los Angeles, and Claude Goldenberg, Professor of Education at Stanford University, provide an overview of the current research with the aim of identifying effective guidelines for English language development (ELD) instruction. First, the presenters define ELD instruction and explain how it differs from sheltered content instruction. They then present guidelines for ELD instruction derived from the six syntheses and meta-analyses they reviewed and evaluate the strength of the evidence for each of these guidelines.

Summer Consortium
Benedict XVI and the Compendium of the Catechism | Fr. William Saunders

Summer Consortium

Play Episode Listen Later Jul 22, 2006 48:50


Fr. William Saunders, speaking on “Benedict XVI and the Compendium of the Catechism,” informed the audience that the current pope was very much responsible for the success of both the Catechism and the newly released Compendium of the Catechism. “The Compendium is a beautiful summary, if you will, of the Catechism of the Catholic Church. The truths of the Faith are laid out in the traditional question-and-answer format, and each chapter begins with a beautiful painting which itself teaches an important facet of the Faith,” he explained. “Pope Benedict knew that the Church needed a simple format to help spread the Faith and to help catechize the ignorant. Everyone should read this little book and better understand, in a succinct and clear manner, the Roman Catholic Faith.”