POPULARITY
The machines are coming. Scratch that—they're already here: AIs that propose new combinations of ideas; chatbots that help us summarize texts or write code; algorithms that tell us who to friend or follow, what to watch or read. For a while the reach of intelligent machines may have seemed somewhat limited. But not anymore—or, at least, not for much longer. The presence of AI is growing, accelerating, and, for better or worse, human culture may never be the same.

My guest today is Dr. Iyad Rahwan. Iyad directs the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin. Iyad is a bit hard to categorize. He's equal parts computer scientist and artist; one magazine profile described him as "the Anthropologist of AI." Labels aside, his work explores the emerging relationships between AI, human behavior, and society. In a recent paper, Iyad and colleagues introduced a framework for understanding what they call "machine culture." The framework offers a way of thinking about the different routes through which AI may transform—is transforming—human culture.

Here, Iyad and I talk about his work as a painter and how he brings AI into the artistic process. We discuss whether AIs can make art by themselves and whether they may eventually develop good taste. We talk about how AlphaGo Zero upended the world of Go and about how LLMs might be changing how we speak. We consider what AIs might do to cultural diversity. We discuss the field of cultural evolution and how it provides tools for thinking about this brave new age of machine culture. Finally, we discuss whether any spheres of human endeavor will remain untouched by AI influence.

Before we get to it, a humble request: If you're enjoying the show—and it seems that many of you are—we would be ever grateful if you could let the world know. You might do this by leaving a rating or review on Apple Podcasts, or maybe a comment on Spotify. You might do this by giving us a shout-out on the social media platform of your choice. Or, if you prefer less algorithmically mediated avenues, you might do this just by telling a friend about us face-to-face. We're hoping to grow the show, and the best way to do that is through listener endorsements and word of mouth. Thanks in advance, friends.

Alright, on to my conversation with Iyad Rahwan. Enjoy!

A transcript of this episode will be available soon.

Notes and links
3:00 – Images from Dr. Rahwan's 'Faces of Machine' portrait series. One of the portraits from the series serves as our tile art for this episode.
11:30 – The "stochastic parrots" term comes from an influential paper by Emily Bender and colleagues.
18:30 – A popular article about DALL-E and the "avocado armchair."
21:30 – Ted Chiang's essay, "Why A.I. isn't going to make art."
24:00 – An interview with Boris Eldagsen, who won the Sony World Photography Awards in March 2023 with an image that was later revealed to be AI-generated.
28:30 – A description of the concept of "science fiction science."
29:00 – Though widely attributed to different sources, Isaac Asimov appears to have developed the idea that good science fiction predicts not the automobile, but the traffic jam.
30:00 – The academic paper describing the Moral Machine experiment. You can judge the scenarios for yourself (or design your own scenarios) here.
30:30 – An article about the Nightmare Machine project; an article about the Deep Empathy project.
37:30 – An article by Cesar Hidalgo and colleagues about the relationship between television/radio and global celebrity.
41:30 – An article by Melanie Mitchell (former guest!) on AI and analogy. A popular piece about that work.
42:00 – A popular article describing the study of whether AIs can generate original research ideas. The preprint is here.
46:30 – For more on AlphaGo (and its successors, AlphaGo Zero and AlphaZero), see here.
48:30 – The study finding that the novelty of human Go play increased due to the influence of AlphaGo.
51:00 – A blog post delving into the idea that ChatGPT overuses certain words, including "delve." A recent preprint by Dr. Rahwan and colleagues presents evidence that "delve" (and other words overused by ChatGPT) are now being used more in human spoken communication.
55:00 – A paper using simulations to show how LLMs can "collapse" when trained on data that they themselves generated.
1:01:30 – A review of the literature on filter bubbles, echo chambers, and polarization.
1:02:00 – An influential study by Dr. Chris Bail and colleagues suggesting that exposure to opposing views might actually increase polarization.
1:04:30 – A book by Geoffrey Hodgson and Thorbjørn Knudsen, who are often credited with developing the idea of "generalized Darwinism" in the social sciences.
1:12:00 – An article about Google's NotebookLM podcast-like audio summaries.
1:17:30 – An essay by Ursula K. Le Guin on children's literature and the Jungian "shadow."

Recommendations
The Secret of Our Success, Joseph Henrich
"Machine Behaviour," Iyad Rahwan et al.

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala.

Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here!

We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.

For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).
Most of us would sacrifice one person to save five. It's a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy. That's the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did almost 20 years ago and then updated in 2017. Historically, the questions posed by the Trolley Problem have been great for thought experiments and conversations at a certain kind of cocktail party. Now, new technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today, we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that still baffle its creators.

Special thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from the Moral Machine group at MIT. Also thanks to Fiery Cushman, Matthew DeBord, Sertac Karaman, Martine Powers, Xin Xiang, and Roborace for all of their help. Thanks to the CUNY Graduate School of Journalism students who collected the vox: Chelsea Donohue, Ivan Flores, David Gentile, Maite Hernandez, Claudia Irizarry-Aponte, Comice Johnson, Richard Loria, Nivian Malik, Avery Miles, Alexandra Semenova, Kalah Siegel, Mark Suleymanov, Andee Tagle, Shaydanay Urbani, Isvett Verde and Reece Williams.

EPISODE CREDITS
Reported and produced by Amanda Aronczyk and Bethel Habte.

Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)!

Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today. Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org.

Leadership support for Radiolab's science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.
This podcast is a commentary and does not contain any copyrighted material from the reference source. We strongly recommend accessing or buying the reference source as well.
■Reference Source: https://www.ted.com/talks/iyad_rahwan_what_moral_decisions_should_driverless_cars_make
■Post on this topic (you can get FREE learning materials!): https://englist.me/112-academic-words-reference-from-iyad-rahwan-what-moral-decisions-should-driverless-cars-make-ted-talk/
■YouTube Videos: https://youtu.be/Xh4-IJu7au8 (All Words), https://youtu.be/6iR0d3GJ0Yo (Advanced Words), https://youtu.be/NXjBui1fLuU (Quick Look)
■Top Page for Further Materials: https://englist.me/
■SNS (please follow!)
Part Six. The final episode in this series looks to the future. If we are able to reach the point where we can create advanced AI 'beings', will we be able to live alongside them – especially if they are in some ways more intelligent than us, or hold our lives in their hands? Phil puts this question to Iyad Rahwan, a computational social scientist formerly of MIT's Media Lab, who works on the ramifications of human-machine interaction. Thanks to Philip Ball for original music and to www.Freesound.org for supplying sound effects, used under the Creative Commons Attribution 3.0 license and created by the following artist: Decembered. The licence can be read here: https://creativecommons.org/licenses/by/3.0/legalcode
Social media is changing human behavior. How and why are humans being transformed by algorithms? Molly Crockett joins Vasant Dhar in episode 12 of Brave New World to describe her work at the meeting place of technology and morality.
Useful resources:
1. Molly Crockett at Yale, Oxford Neuroscience, Google Scholar and Twitter.
2. Crockett Lab.
3. Moral outrage in the digital age -- MJ Crockett.
4. The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online -- William J Brady, MJ Crockett and Jay J Van Bavel.
5. Inference of trustworthiness from intuitive moral judgments -- Jim AC Everett, David A Pizarro and MJ Crockett.
6. The Social Media Industrial Complex -- Episode 3 of Brave New World (with Sinan Aral).
7. How Social Media Threatens Society -- Episode 8 of Brave New World (with Jonathan Haidt).
8. A computational reward learning account of social media engagement -- Björn Lindström and others.
9. The Alignment Problem -- Brian Christian.
10. You and the Algorithm: It Takes Two to Tango -- Nick Clegg.
11. Moral Learning: Conceptual foundations and normative relevance -- Peter Railton.
12. The social dilemma of autonomous vehicles -- Jean-François Bonnefon, Azim Shariff and Iyad Rahwan.
13. Emotion shapes the diffusion of moralized content in social networks -- William J Brady and others.
Artificial intelligence is intruding into art and into our lives. To what extent? That is what Iyad Rahwan, an expert in artificial intelligence and art, is investigating at the Max Planck Institute for Human Development.
We are living in an era in which our decisions will determine whether the human race leaps to progress never seen before or toward its own definitive extinction. Do you think artificial intelligence will one day surpass human intelligence? To debate this subject with more authority and to prepare you for the future, we invited Camila Renaux, a specialist in Digital Marketing and Artificial Intelligence for business, and talked about the lessons of the book "Superintelligence" by Nick Bostrom. The author takes a more philosophical look at the technical scope of Artificial Intelligence, and the book was recommended by Bill Gates and Elon Musk. To learn more about Camila Renaux's work, visit camilarenaux.com.br or her YouTube channel: www.youtube.com/camilarenaux. Iyad Rahwan's TED Talk on the moral decisions of self-driving cars: https://youtu.be/tb-WdVA4_bo. ResumoCast is a weekly podcast, hosted by Gustavo Carriconde, that explores a business and entrepreneurship book in 30 minutes. Subscribe for free at resumocast.com.br to receive a new episode every Monday.
In this episode, Angie interviews Alfredo Morales, Assistant Professor at the New England Complex Systems Institute (NECSI) and Visiting Scientist at the MIT Media Lab. Professor Morales works in the areas of complex systems, AI, data science, and human behavior to develop both methods and insights that help solve complex societal problems. During his interview, he shares details from the AI and Beyond Program at NECSI, a five-day certificate program where he presented alongside Stephen Wolfram, Iyad Rahwan, and Yaneer Bar-Yam. He also discusses some unintended consequences that could arise from artificial intelligence and how complexity science can help us integrate AI systems more effectively.
As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with, and benefit from, the tremendous disruptive change that we anticipate from AI? To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the podcast. Josh is a professor of psychology and a member of the Center for Brain Science faculty at Harvard University. Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab.
Most of us would sacrifice one person to save five. It's a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy. That's the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did about 11 years ago. Luckily, the Trolley Problem has always been little more than a thought experiment, mostly confined to conversations at a certain kind of cocktail party. That is, until now. New technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that we can't even figure out ourselves. This story was reported and produced by Amanda Aronczyk and Bethel Habte. Thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from the Moral Machine group at MIT. Also thanks to Fiery Cushman, Matthew DeBord, Sertac Karaman, Martine Powers, Xin Xiang, and Roborace for all of their help. Thanks to the CUNY Graduate School of Journalism students who collected the vox: Chelsea Donohue, Ivan Flores, David Gentile, Maite Hernandez, Claudia Irizarry-Aponte, Comice Johnson, Richard Loria, Nivian Malik, Avery Miles, Alexandra Semenova, Kalah Siegel, Mark Suleymanov, Andee Tagle, Shaydanay Urbani, Isvett Verde and Reece Williams. Support Radiolab today at Radiolab.org/donate.
Should a driverless car kill you if it means saving five pedestrians? In this primer on the social dilemmas of driverless cars, Iyad Rahwan explores how the technology will challenge our morality and explains his work collecting data from real people on the ethical trade-offs we're willing (and not willing) to make.
Should your driverless car kill you if it means saving five pedestrians? In this primer on the social dilemmas of driverless cars, Iyad Rahwan explores how the technology will challenge our morality and explains his work collecting data from real people on the ethical trade-offs we're willing (and not willing) to make.
Should your driverless car kill you to save five pedestrians? In this introduction to the social dilemma of driverless cars, Iyad Rahwan examines how the technology may challenge our morality. He also presents his work gathering data from the public about the ethical trade-offs we are (and are not) willing to make.
Should your self-driving car kill you instead of five pedestrians? In this introduction to the social dilemmas of self-driving cars, Iyad Rahwan explores how the technology will challenge our morality and explains his work collecting data from ordinary people about which ethical trade-offs we are (and are not) willing to accept.
If your autonomous vehicle had to kill you to save the lives of five pedestrians, should it do so? In this introduction to the social dilemmas of autonomous vehicles, Iyad Rahwan analyzes how this technology will challenge our morality. Rahwan also explains his work collecting data through public surveys about the ethical trade-offs we are, or are not, willing to make.
Should your self-driving car kill you if it means saving five pedestrians? In this primer on the social dilemma of self-driving cars, Iyad Rahwan explores how this technology will challenge our morality, and describes his work collecting data from real people about the ethical trade-offs they are (and are not) willing to make.
Should your driverless car sacrifice you to save the lives of five pedestrians? In this introduction to the social dilemmas of driverless cars, Iyad Rahwan examines how the technology will challenge our morality. He also reports on his work collecting data from real people about which ethical compromises they are (and are not) willing to make.
Berkman Klein Center for Internet and Society: Audio Fishbowl
AI technologies have the potential to vastly enhance the performance of many systems and institutions, from making transportation safer, to enhancing the accuracy of medical diagnosis, to improving the efficiency of food safety inspections. However, AI systems can also create moral hazards by potentially diminishing human accountability, perpetuating biases inherent in the AI's training data, or optimizing for one performance measure at the expense of others. These challenges require new kinds of "user interfaces" between machines and society. We will explore these issues and how they would interface with existing institutions.

About Joi Ito
Joi Ito is the director of the MIT Media Lab, Professor of the Practice at MIT, and the author, with Jeff Howe, of Whiplash: How to Survive Our Faster Future (Grand Central Publishing, 2016). Ito is chairman of the board of PureTech Health and serves on several other boards, including The New York Times Company, Sony Corporation, the MacArthur Foundation, and the Knight Foundation. He is also the former chairman and CEO of Creative Commons, and a former board member of ICANN, the Open Source Initiative, and the Mozilla Foundation. Ito is a serial entrepreneur who helped start and run numerous companies, including one of the first web companies in Japan, Digital Garage, and the first commercial Internet service provider in Japan, PSINet Japan/IIKK. He has been an early-stage investor in many companies, including Formlabs, Flickr, Kickstarter, littleBits, and Twitter. Ito has received numerous awards, including the Lifetime Achievement Award from the Oxford Internet Institute and the Golden Plate Award from the Academy of Achievement, and he was inducted into the SXSW Interactive Festival Hall of Fame in 2014. He has been awarded honorary doctorates from The New School and Tufts University.

About Iyad Rahwan
Iyad Rahwan is the AT&T Career Development Professor and an Associate Professor of Media Arts & Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia, and is an affiliate faculty member at the MIT Institute for Data, Systems, and Society (IDSS). His work lies at the intersection of the computer and social sciences, with a focus on collective intelligence, large-scale cooperation, and the social aspects of Artificial Intelligence. His team built the Moral Machine, which has collected 28 million decisions to date about how autonomous cars should prioritize risk. Rahwan's work has appeared in major academic journals, including Science and PNAS, and has been featured in major media outlets, including the New York Times, The Economist, the Wall Street Journal, and the Washington Post.

More info on this event here: https://cyber.harvard.edu/events/luncheons/2017/04/Ito
Every week a million more people move to live in cities. Can cities cope with this constant expansion? Aleks explores whether 'smart' cities are the answer, or whether they come with a hidden price: personal freedom. She visits the world's "smartest" city, Masdar in Abu Dhabi, and explores the social engineering that is as much a part of the design as the bricks and mortar. Contributors: physicist Geoffrey West, urban explorer Brad Garrett, Lean Doody from ARUP, Dr Iyad Rahwan, and architect and artist Usman Haque. Producer: Peter McManus.