Podcast appearances and mentions of Stuart J Russell

  • Podcasts: 5
  • Episodes: 5
  • Average duration: 40m
  • Release frequency: infrequent
  • Latest episode: Sep 18, 2023

POPULARITY

[Popularity chart: mentions per year, 2017–2024]


Latest podcast episodes about Stuart J Russell

The Ezra Klein Show
Should we press pause on AI?

The Ezra Klein Show

Play Episode Listen Later Sep 18, 2023 57:09


How worried should we be about AI? Sean Illing is joined by Stuart J. Russell, a professor at the University of California, Berkeley, and director of the Center for Human-Compatible AI. Russell was among the signatories of an open letter calling for a six-month pause on AI training. They discuss the dangers of losing control of AI and the potential upsides of this rapidly developing technology.

Host: Sean Illing (@seanilling), host, The Gray Area
Guest: Stuart J. Russell, professor at the University of California, Berkeley, and director of the Center for Human-Compatible AI

References:
  • Pause Giant AI Experiments: An Open Letter
  • "AI has much to offer humanity. It could also wreak terrible harm. It must be controlled." by Stuart Russell (The Observer, April 2023)
  • Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig (Pearson Education International)
  • Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (Penguin Random House, 2020)
  • "A Conversation With Bing's Chatbot Left Me Deeply Unsettled" by Kevin Roose (New York Times, February 2023)

This episode was made by:
  • Engineer: Patrick Boyd
  • Deputy Editorial Director, Vox Talk: A.M. Hall

White House Chronicle
AI: A 'Pause' for a Transformative Technology

White House Chronicle

Play Episode Listen Later May 26, 2023 27:42


Host Llewellyn King and Co-host Adam Clayton Powell III have a wide-ranging discussion with Stuart J. Russell, Professor of Computer Science at the University of California, Berkeley, on why artificial intelligence must be controlled.

We Are Not Saved
Books I Finished in April

We Are Not Saved

Play Episode Listen Later May 5, 2020 28:59


  • Super Thinking: The Big Book of Mental Models By: Gabriel Weinberg and Lauren McCann
  • Human Compatible: Artificial Intelligence and the Problem of Control By: Stuart J. Russell
  • Joseph Smith's First Vision: Confirming Evidences and Contemporary Accounts By: Milton Vaughn Backman
  • The Cultural Evolution Inside of Mormonism By: Greg Trimble
  • Destiny of the Republic: A Tale of Madness, Medicine and the Murder of a President By: Candice Millard
  • A Time to Build: From Family and Community to Congress and the Campus, How Recommitting to Our Institutions Can Revive the American Dream By: Yuval Levin
  • The Worth of War By: Benjamin Ginsberg
  • The Pioneers: The Heroic Story of the Settlers Who Brought the American Ideal West By: David McCullough
  • Sex and Culture By: J. D. Unwin
  • Euripides I: Alcestis, Medea, The Children of Heracles, Hippolytus By: Euripides

Carnegie Council Audio Podcast
The Future of Artificial Intelligence, with Stuart J. Russell

Carnegie Council Audio Podcast

Play Episode Listen Later Feb 24, 2020 45:24


UC Berkeley's Professor Stuart J. Russell discusses the near- and far-future of artificial intelligence, including self-driving cars, killer robots, governance, and why he's worried that AI might destroy the world. How can scientists reconfigure AI systems so that humans will always be in control? How can we govern this emerging technology across borders? What can be done if autonomous weapons are deployed in 2020?
