Podcasts about Moral Machine

Online platform

  • 39 podcasts
  • 43 episodes
  • 54m average duration
  • 1 new episode monthly
  • Latest: Jan 5, 2025




Latest podcast episodes about Moral Machine

IB Matters
Engaging your students with Chaahat Dhall

Jan 5, 2025 · 42:49


Our guest, Chaahat Dhall, from Fountainhead School in Surat, India, has some great ways she engages her MYP students. From human libraries to silent debates to advisory circles and beyond, Chaahat shares some of the many resources she uses throughout the year. Chaahat was also one of Jiana Shah's MYP teachers, so you can learn from one of the teachers who influenced Jiana as she developed her skills as an MYP student. The list of resources Chaahat has shared is too extensive for these notes, so I will include some here; the entire list can be found on our website.

Links from Chaahat (more on our website):
  • Human Library: a platform that encourages dialogue and understanding through personal storytelling and unique perspectives.
  • Oxfam Hunger Banquet: an interactive event designed to raise awareness about global hunger and inequality through experiential learning.
  • The Moral Machine: a platform for exploring ethical dilemmas and decision-making through AI simulations.
  • Use of Moral Machine: a guide on integrating The Moral Machine into classroom discussions and activities.
  • Information Is Beautiful - Current Events: visualizes current events and data in a compelling and accessible manner.
  • TED-Ed: helps educators create engaging lesson plans using TED-Ed content.
  • Globalia: a great resource for maps and visual representations of global topics.
  • IQB (Inspiring Inquiry): resources and tools to promote inquiry-based learning in classrooms.

Email IB Matters: IBMatters@mnibschools.org
Twitter: @MattersIB
IB Matters website · MN Association of IB World Schools (MNIB) website · Donate to IB Matters
To appear on the podcast, or if you would like to sponsor the podcast, please contact us at the email above.

Many Minds
The rise of machine culture

Oct 31, 2024 · 80:17


The machines are coming. Scratch that: they're already here. AIs that propose new combinations of ideas; chatbots that help us summarize texts or write code; algorithms that tell us who to friend or follow, what to watch or read. For a while the reach of intelligent machines may have seemed somewhat limited. But not anymore, or at least not for much longer. The presence of AI is growing, accelerating, and, for better or worse, human culture may never be the same.

My guest today is Dr. Iyad Rahwan. Iyad directs the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin. Iyad is a bit hard to categorize. He's equal parts computer scientist and artist; one magazine profile described him as "the Anthropologist of AI." Labels aside, his work explores the emerging relationships between AI, human behavior, and society. In a recent paper, Iyad and colleagues introduced a framework for understanding what they call "machine culture." The framework offers a way of thinking about the different routes through which AI may transform, and is transforming, human culture.

Here, Iyad and I talk about his work as a painter and how he brings AI into the artistic process. We discuss whether AIs can make art by themselves and whether they may eventually develop good taste. We talk about how AlphaGo Zero upended the world of Go and about how LLMs might be changing how we speak. We consider what AIs might do to cultural diversity. We discuss the field of cultural evolution and how it provides tools for thinking about this brave new age of machine culture. Finally, we discuss whether any spheres of human endeavor will remain untouched by AI influence.

Before we get to it, a humble request: If you're enjoying the show, and it seems that many of you are, we would be ever grateful if you could let the world know. You might do this by leaving a rating or review on Apple Podcasts, or maybe a comment on Spotify.
You might do this by giving us a shout-out on the social media platform of your choice. Or, if you prefer less algorithmically mediated avenues, you might do this just by telling a friend about us face-to-face. We're hoping to grow the show, and the best way to do that is through listener endorsements and word of mouth. Thanks in advance, friends. Alright, on to my conversation with Iyad Rahwan. Enjoy!

A transcript of this episode will be available soon.

Notes and links
3:00 – Images from Dr. Rahwan's 'Faces of Machine' portrait series. One of the portraits from the series serves as our tile art for this episode.
11:30 – The "stochastic parrots" term comes from an influential paper by Emily Bender and colleagues.
18:30 – A popular article about DALL-E and the "avocado armchair."
21:30 – Ted Chiang's essay, "Why A.I. isn't going to make art."
24:00 – An interview with Boris Eldagsen, who won the Sony World Photography Awards in March 2023 with an image that was later revealed to be AI-generated.
28:30 – A description of the concept of "science fiction science."
29:00 – Though widely attributed to different sources, Isaac Asimov appears to have developed the idea that good science fiction predicts not the automobile, but the traffic jam.
30:00 – The academic paper describing the Moral Machine experiment. You can judge the scenarios for yourself (or design your own scenarios) here.
30:30 – An article about the Nightmare Machine project; an article about the Deep Empathy project.
37:30 – An article by Cesar Hidalgo and colleagues about the relationship between television/radio and global celebrity.
41:30 – An article by Melanie Mitchell (former guest!) on AI and analogy. A popular piece about that work.
42:00 – A popular article describing the study of whether AIs can generate original research ideas. The preprint is here.
46:30 – For more on AlphaGo (and its successors, AlphaGo Zero and AlphaZero), see here.
48:30 – The study finding that the novelty of human Go playing increased due to the influence of AlphaGo.
51:00 – A blog post delving into the idea that ChatGPT overuses certain words, including "delve." A recent preprint by Dr. Rahwan and colleagues presents evidence that "delve" (and other words overused by ChatGPT) are now being used more in human spoken communication.
55:00 – A paper using simulations to show how LLMs can "collapse" when trained on data that they themselves generated.
1:01:30 – A review of the literature on filter bubbles, echo chambers, and polarization.
1:02:00 – An influential study by Dr. Chris Bail and colleagues suggesting that exposure to opposing views might actually increase polarization.
1:04:30 – A book by Geoffrey Hodgson and Thorbjørn Knudsen, who are often credited with developing the idea of "generalized Darwinism" in the social sciences.
1:12:00 – An article about Google's NotebookLM podcast-like audio summaries.
1:17:30 – An essay by Ursula K. Le Guin on children's literature and the Jungian "shadow."

Recommendations
The Secret of Our Success, Joseph Henrich
"Machine Behaviour," Iyad Rahwan et al.

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).

Cigars Liquor And More
369 How Many Scenarios does an Auto Driver need to Solve with La Aurora 120th Anniversary and Lost Irish

Apr 8, 2024 · 52:58


Bill and Darrell discuss an AI trying to solve thousands upon thousands of trolley decisions, and some of the implications. Ghost Autonomy announced experiments with ChatGPT to help its software navigate its environment. They posed 50,000 scenarios and compared the results across four LLM models. We're concerned, and in our concern smoked a 120th Anniversary La Aurora and drank some Lost Irish. https://arstechnica.com/ai/2024/03/would-chatbots-make-the-same-driving-decisions-as-us/

Radiolab
Driverless Dilemma

Sep 15, 2023 · 41:20


Most of us would sacrifice one person to save five. It's a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy. That's the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did almost 20 years ago, then updated again in 2017. Historically, the questions posed by the Trolley Problem are great for thought experimentation and conversations at a certain kind of cocktail party. Now, new technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today, we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that still baffle its creators. Special thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from the Moral Machine group at MIT. Also thanks to Fiery Cushman, Matthew DeBord, Sertac Karaman, Martine Powers, Xin Xiang, and Roborace for all of their help. Thanks to the CUNY Graduate School of Journalism students who collected the vox: Chelsea Donohue, Ivan Flores, David Gentile, Maite Hernandez, Claudia Irizarry-Aponte, Comice Johnson, Richard Loria, Nivian Malik, Avery Miles, Alexandra Semenova, Kalah Siegel, Mark Suleymanov, Andee Tagle, Shaydanay Urbani, Isvett Verde and Reece Williams.

EPISODE CREDITS
Reported and produced by Amanda Aronczyk and Bethel Habte.

Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)! Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today. Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org. Leadership support for Radiolab's science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.

Coffee Break: Señal y Ruido
Rebroadcast: Ep187: Tunguska Event; Electron Dipole and CP Symmetry; Moral Machine Experiment; The Pseudoscience of Gwyneth Paltrow

Jul 27, 2023 · 147:05


Our weekly round-table reviewing the latest science news. In today's episode: new results on Tunguska 1908, the largest impact recorded in history; the culture of morality: how trying to teach machines teaches us about our own inconsistencies; new bounds on the sphericity of the electron, in search of CP-symmetry violations; the pseudoscience scam: Gwyneth Paltrow's Goop products put to the test; assorted news: TMT, Kepler, "The War of the Worlds", recent extinctions. Panelists: Sara Robisco, Carlos González, Bernabé Cedrés, Carlos Westendorp, Héctor Socas. All comments made during the discussion represent solely the opinion of the person making them... and sometimes not even that. Hosted on Acast. See acast.com/privacy for more information.

We Forgot the Name!
Great Upload Consistency

Nov 30, 2022 · 70:26


Wow, the third episode of this season is finally out! Lethe's so good at consistently uploading podcast episodes. This week, Lethe and Mimir discuss a new superhero movie scheduled to release in the distant future, and a creepy self-driving car that makes eye contact with pedestrians. Mimir tests Lethe with the Moral Machine.
Links: Deadpool Teasers · Moral Machine · Eye Contact Car
You can contact us through email at wftnpodcast@gmail.com, on Instagram @we_forgot_the_username, or on Twitter @Crazy_Booknerd. Intro Music by WolfBeat from Pixabay. Transition Music by Music For Videos from Pixabay. Weird News Intro Music by Centyś from Pixabay. Background Music by sscheidl, HumanoideVFX, ipsyduckk, Musicalmix2020, Electronic-Senses, QubeSounds, Playsounds, REDproductions, Musictown, and lemonmusicstudio from Pixabay. Sound effects by leenn792, SamsterBirdies, and jacksonacademyashmore from Pixabay, and from freesoundeffects.com.

Uehiro Centre for Practical Ethics
The Moral Machine Experiment

Nov 9, 2022 · 49:03


In this St Cross Special Ethics Seminar, Dr Edmond Awad discusses his project, the Moral Machine, an internet-based game exploring the ethical dilemmas faced by driverless cars.

Abstract: I describe the Moral Machine, an internet-based serious game exploring the many-dimensional ethical dilemmas faced by autonomous vehicles. The game enabled us to gather 40 million decisions from 3 million people in 200 countries and territories. I report the various preferences estimated from this data and document interpersonal differences in the strength of these preferences. I also report cross-cultural ethical variation and uncover major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. I discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics. Finally, I describe other follow-up work that builds on this project.
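The preferences Awad describes are, at their core, estimated by comparing how often one attribute level is spared over another across millions of binary dilemmas (the published analysis uses conjoint-style effect estimates). A minimal sketch of that tallying idea, with an invented record layout that is not the project's actual data format:

```python
from collections import defaultdict

# Hypothetical layout: each record is one dilemma in which the two options
# differed on a single attribute, plus which option the respondent spared.
responses = [
    ("species", "humans", "pets"),      # (attribute, spared, not spared)
    ("species", "humans", "pets"),
    ("species", "pets", "humans"),
    ("group size", "more", "fewer"),
    ("group size", "more", "fewer"),
]

# The level whose "share spared" we report, per attribute.
FOCAL_LEVEL = {"species": "humans", "group size": "more"}

def preference_strength(records):
    """For each attribute, the share of dilemmas in which the focal level
    was spared: a crude stand-in for the paper's preference estimates."""
    counts = defaultdict(lambda: [0, 0])  # attribute -> [focal spared, total]
    for attribute, spared, _other in records:
        counts[attribute][1] += 1
        if spared == FOCAL_LEVEL[attribute]:
            counts[attribute][0] += 1
    return {a: focal / total for a, (focal, total) in counts.items()}

print(preference_strength(responses))  # species ≈ 0.67, group size = 1.0
```

In the real experiment each scenario varies several attributes at once, so the published estimates come from a regression over randomized scenario features rather than raw shares; the raw-share version above only illustrates the direction and relative strength of a preference.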

Tikkie Podcast
Tikkie Podcast S0202 Chatbot Billie

Oct 25, 2022 · 24:56


Pierre had an interesting chat with Billie, the Bol.com chatbot. Or was it ultimately a human who helped him complete his order? Free-associating on the transcript of that chat, this edition touches on the Moral Machine, the musical mentor, and artificial intelligence, among other things. The key question: does it matter whether you are helped by a robot, a human, or a combination of the two?

Lost in Transportation
Baby you can drive my car

Dec 6, 2021 · 20:28


Theme 7: Technological innovations. Since the industrial revolution, technological innovation has been transforming our lives, met sometimes with enthusiasm and sometimes with rejection. For the first technophiles, it was to give humanity access to happiness. In the 21st century, it is expected to improve our daily lives while respecting the environment. In mobility as in other fields, technological innovation must now answer the needs of the population and the challenges of sustainability. This series of "Lost in Transportation" looks at this question through the analysis of four technological innovations in mobility. In the first episode, we discussed the charging infrastructure needed for the electric vehicle to spread. Fear of running out of charge is one of the obstacles to the development of electromobility. This problem can be solved by developing suitable charging infrastructure, which requires solutions for optimizing use of the electricity grid, although the question of price remains open. The next two episodes will focus more on digital aspects, with an analysis of information services to support multimodal mobility and the first results of the ACTIV behavior-change incentive program. In this second episode, we address what the autonomous vehicle means for personal mobility.

Episode 2: Baby you can drive my car. The autonomous vehicle is a technological innovation with a strong presence in the collective imagination, as shown by the works of science fiction that picture a car of the future that is autonomous, and often flying. The technology is gradually moving from dream to reality through several hundred experiments around the world. Making this innovation concrete raises a number of questions. How far along is the development of the autonomous vehicle? What solutions does it offer for mobility needs? As we will see, the autonomous vehicle raises questions about mobility needs as much as about how cities are made. From a practical standpoint, the technological project of the autonomous vehicle must first become a transport project through users' adoption of the vehicle. Then, will the autonomous vehicle have succeeded if it only lets the driver read the newspaper in traffic jams? This transport project would become a mobility project by offering a service adapted to users' needs and to the territorial context. In any case, how does this mobility project become an urban project, integrated into a territory and serving its accessibility and the local authority's strategy? And finally, what is truly desirable or sustainable in the development of the autonomous vehicle? Is this innovation about comfort, or can it contribute to more sustainable mobility?

References:
  • 6t-bureau de recherche, 2020, Métropole de Rouen Normandie – ZCR Véhicule Autonome, https://6-t.co/references/zone-a-circulation-restreinte-vehicule-autonome/
  • K2000 excerpt, « Knight Rider TV : Connaissance avec KITT », January 29, 2009, https://www.youtube.com/watch?v=Ohi1urgZ8hU
  • Goniot, C., Louvet, N., Chrétien, J., 2020, Véhicules autonomes en ville : de la nécessité d'une expérimentation au service des habitants, Ville Rail et Transport, 636, June 2020
  • Ministère de l'écologie, 2020, Stratégie Nationale pour le Développement de la mobilité routière automatisée, https://www.ecologie.gouv.fr/vehicules-automatises
  • Pichereau, D., 2021, Rapport sur le déploiement européen du Véhicule Autonome, https://www.ecologie.gouv.fr/sites/default/files/rapport%20pichereau.pdf
  • Ordonnance of April 14, 2021, on the criminal liability regime applicable to vehicles with delegated driving: https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000043729532
  • Décret of June 29, 2021, on the provisions applicable to automated driving systems: https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000043370894
  • Leclercq, B., October 24, 2021, « Mobilités : à Dubaï, plus vite, plus haut, plus fort », Libération, https://www.liberation.fr/economie/transports/mobilites-a-dubai-plus-vite-plus-haut-plus-fort-20211024_WLPSSF664FA4PBJBTDGB7WTR7Y/
  • Ministère de l'écologie, October 2021, Développement du véhicule automatisé : état des lieux des actions, https://www.ecologie.gouv.fr/sites/default/files/Point%20d%27%C3%A9tape%20sur%20le%20d%C3%A9veloppement%20de%20la%20mobilit%C3%A9%20automatis%C3%A9e%20et%20connect%C3%A9e%20-%20Octobre%202021.pdf
  • Grisoni, A., Madelenat, J., 2021, Le véhicule autonome : quel rôle dans la transition écologique des mobilités ?, https://www.actu-environnement.com/media/pdf/news-37182-rapport-complet-etude-vehicule-autonome-forum-vies-mobiles-mars-2021.pdf
  • The Moral Machine online platform: https://www.moralmachine.net/hl/fr
  • Awad, E., Dsouza, S., Kim, R., et al., 2018, The Moral Machine experiment, Nature 563, 59–64, https://www.nature.com/articles/s41586-018-0637-6
  • Chaire Logistic City, 2021, Les mobilités du e-commerce : quels impacts sur la ville ?, https://docplayer.fr/211834831-Les-mobilites-du-e-commerce-quels-impacts-sur-la-ville-welcome-to-logistics-city-n-o.html

Books 'n Brew
Ep23 - Autonomous Cars: A moral machine?

Dec 5, 2021 · 4:43


There is a need to look at a future world driven by autonomous cars through the lens of moral philosophy. This episode takes the famous trolley problem from a mere thought experiment and applies it to real-life situations for self-driving cars.

Masters of Privacy (ES)
Angel Cuevas: Nanotargeting on Facebook

Oct 24, 2021 · 36:37


Ángel Cuevas (PhD, MSc, and degree in telecommunications engineering from Universidad Carlos III de Madrid, and a Ramón y Cajal researcher at the same institution) has published more than 70 articles in recognized international journals and conferences. His main lines of research center on developing advanced measurement technologies for auditing potentially inappropriate practices by large technology companies. He also develops technology and algorithms to optimize solutions in digital advertising. Ángel was awarded the 2018 Emilio Aced Prize by the Spanish Data Protection Agency for the best research on data protection.

References:
  • Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data
  • TechCrunch article on the nanotargeting study
  • The Moral Machine experiment
  • Ángel Cuevas on Twitter
  • Ángel Cuevas's profile at Universidad Carlos III

Learn Scratch SG
AI: Moral Machine – Judge Driverless Accident Dilemma

Oct 12, 2021 · 1:24


This episode is also available as a blog post: AI: Moral Machine - judge Driverless Accident Dilemma - Karate Coder

Learn Scratch SG
AI: Moral Machine – Evil AI Cartoon

Sep 19, 2021 · 3:45


This episode is also available as a blog post: https://karatecoder.tech/ai-moral-machine-evil-ai-cartoon/

Stacja Zmiana
101. An engaging conversation about the future with Dr. Edyta Sadowska

Jul 25, 2021 · 59:33


A great, energetic conversation about algorithms, technology, robots, the future of education, and longevity with a remarkable scientist. Dr. Edyta Sadowska is a researcher and futurologist. She works on the future academically, didactically, and on Instagram, enjoying every space she discovers while deciphering trends and staying attentive to the changes already appearing today. The future, and the possibility of co-creating it, is what drives her every day. As an academic lecturer, she also works on the future of education.
  • Edyta Sadowska's Instagram, where you will find interesting stories about the future: https://bit.ly/3xZ9SyO
  • An example of thinking about the future: Natalia Hatalska's report "Gdańsk of the Future": https://bit.ly/3iN2JLs
  • We recommend Aleksandra Przegalińska and Paweł Oksanowicz's book "Sztuczna Inteligencja. Nieludzka, arcyludzka": https://bit.ly/3y9JlPC
  • We recommend Natalia Hatalska's book "Wiek paradoksów": https://bit.ly/3BN2Kbg
  • We recommend Magda Gacek's reportage "Zabawy w Boga. Ludzie o magnetycznych palcach": https://bit.ly/2UIMyH6
  • My Ktip on how to live long in a pandemic: https://bit.ly/3zujnXa
  • The Vika app, a healthy-lifestyle assistant that suggests how to keep your health: https://bit.ly/3x95KeA
  • A KTIP on whether artificial intelligence is a blessing or a threat, in which I expand on the topic: https://bit.ly/3y7khsu
  • A conversation with Katarzyna Szymielewicz, co-founder and president of the Panoptykon Foundation, about what happens where technology meets people: https://bit.ly/3ePIzPN
  • Important reading, Natalia Hatalska's trend map: https://bit.ly/3kUUb7Z
  • The robot Sophia, and would you want to sit next to her? https://bit.ly/3y72cdT
  • Robot Sophia's website: https://bit.ly/2Vb8Q48
  • Robots from Boston Dynamics; take a look at their site: https://bit.ly/371AjIc
  • The definition of the uncanny valley: the unpleasant feeling or revulsion evoked by a robot that closely resembles a human: https://bit.ly/3wYkm0d
  • Edyta talks about The Moral Machine, a platform for gathering human perspectives on moral decisions made by machine intelligence, such as autonomous cars: https://bit.ly/3kXUJdp
  • Tay, the Twitter bot that people corrupted by teaching it offensive phrases: https://bit.ly/3iNe1iW
  • An article in The Guardian written entirely by an AI (generated by GPT-3, OpenAI's language generator). What does it want to tell us? Above all: don't be afraid of me. https://bit.ly/3kPQ11r
  • Replika AI, the first bot that is pleasant to talk to, recommended by Edyta Sadowska: https://replika.ai/
  • Could a government be replaced by artificial intelligence, and why is that a very bad idea? https://bit.ly/3iKtIaF
  • The workings of algorithms should be public: Poland's Supreme Administrative Court ruled that the justice ministry's Random Case Allocation System algorithm constitutes public information: https://bit.ly/3zy3oY1
  • We recommend the report "Przyszłość edukacji. Scenariusze 2046": https://bit.ly/36YX3bV
  • A course Edyta Sadowska recommends, "Elements of AI": https://bit.ly/3i1U9cI
  • At the end, Edyta Sadowska talks about the metaverse; you can read more here: https://bit.ly/3eSzaai
  • Music used in this episode comes from the album: https://bit.ly/3kW0xUO

Indústria 4.0 e Transformação Digital
32. [Sérgio Branco, co-founder and director of ITS] - Ethics and artificial intelligence

Feb 8, 2021 · 40:59


The GoEPIK Podcast welcomes Sérgio Branco, co-founder and director of ITS Rio, for a discussion about ethics and artificial intelligence. Hit play, and check out the episode links:
  • Moral Machine, MIT: https://www.moralmachine.net
  • Justice with Michael Sandel: https://www.youtube.com/playlist?list=PL30C13C91CFFEFEA6
  • Racist Soap Dispenser: https://www.youtube.com/watch?v=YJjv_OeiHmo
  • The English series Years and Years: https://www.bbc.co.uk/programmes/m000539g

Thoughts Feelings Emotions
TBO: The Moral Machine. Are We Horrible People? (Also Video Version Available)

Jan 29, 2021 · 26:36


Are we horrible people? In this episode, we go through the Moral Machine to decide who we would save in a runaway self-driving car. This is the first episode that has a video to go along with it! Check out the video on YouTube: https://www.youtube.com/watch?v=rglIxlWeuC4&t=1s&ab_channel=ThoughtsFeelingsEmotionsPodcast
Twitch: @WellerItsAboutTime · Twitch: @frankcomstein · Twitter: @frankcomstein · YouTube: Frankcomstein Gaming (that's F R A N K C O M S T E I N) · Email: TFEPOD@gmail.com

OBS
Ethical dilemmas of our time 3: Who do we become when our choices are pre-programmed?

Jan 13, 2021 · 10:04


När algoritmer avgör hur en självkörande bil ska agera, så låser vi etiken i dåtiden. Skribenten och ingenjören Christina Gratorp funderar över den tekniska utvecklingens betydelse för människan. ESSÄ: Detta är en text där skribenten reflekterar över ett ämne eller ett verk. Åsikter som uttrycks är skribentens egna. Ursprunglingen publicerad den 15 oktober 2019. Vem är det som kör egentligen? frågar Musse Pigg sina medresenärer i Walt Disneys kortfilm Mickeys Trailer från 1938. Det är ju jag som kör, svarar Janne Långben med sitt sorgfria höhö, medan bilens övergivna förarsäte gapar tomt genom husvagnens vindruta. Varje gång jag läser om utvecklingen av självkörande bilar hör jag den frasen som ett eko i min skalle. Vem är ansvarig om ingen håller i ratten, och vad innebär etik när tekniken fattar besluten åt oss? När vi bara följer protokollet. Ur ett samhälleligt perspektiv tycks protokollet utgöra en modern dygd. I effektiviseringens namn förlitar vi oss i allt större utsträckning på algoritmiserade beslut, i dag realiserade i datorprogram, där socialarbetaren, handläggaren eller juristen enkelt kan trycka på en knapp för att få fram ett beslut för sitt ärende. Vem får bostadsbidrag, hemtjänst eller böter? Besluten finns i ökande grad inkodade i programvaror utvecklade av tekniker, långt ifrån de verksamheter som utgör vårt sociala skyddsnät och de institutioner som vårt gemensamma samhälle vilar på. Om den globala trenden håller i sig innebär det att inrättningar som socialkontoret och åklagarmyndigheten på sikt inte behöver befolkas av socionomer och jurister utan kan reduceras till platser där någon trycker på en knapp. Möjligheten att tolka lagar och regelverk utifrån nya samhälleliga villkor försvinner morgondagens beslut baseras på gårdagens föreställningar. För det är just vad ett algoritmiserat beslut innebär. 
Till skillnad från tekniska hjälpmedel som ett räkneprogram eller en vädersimulator, är program som används för att svara på hur vi ska agera något som binder oss till ett beteende. Om vi med teknikens hjälp kan förutspå vilken dag det kommer regna bestämmer det ingenting åt oss. För bonden kan regnet vara välkommet, för semesterfiraren troligtvis inte. Datorprogrammet Compas däremot, som används i domstolar i USA för att förutsäga vem som med stor sannolikhet kommer att bli en återfallsförbrytare, fungerar determinerande. Det knyter inte bara en individs öde till andras beteenden i en redan passerad dåtid, utan avskaffar också medmänniskans blick på individen som just sin egen. För den åtalade vars livsöde står på spel innebär det både att kategoriseras utifrån föreställningar om grupptillhörighet, som klass, kön och etnicitet, och samtidigt att berövas synen på sin domare som en egen, etisk instans. Överfört på en större del av samhället är frågan vilka vi blir för varandra om vi inte längre kan betrakta våra medmänniskor som etiska aktörer. Vem är det jag möter om det inte är hon själv som agerar och vad gör det med min syn på henne? Och vad innebär det i sin tur för min egen förmåga att handla etiskt att betrakta min nästa som förutbestämd? Den moderna byråkratins målrationella tänkande att bara utföra de uppgifter vi åläggs var enligt sociologen Zygmunt Bauman en förutsättning för andra världskrigets terror. Försvaret för de algoritmiserade besluten utgår ofta från argument om rättssäkerhet och likabehandling. Genom att eliminera den så kallade mänskliga faktorn ska inga fördomar slingra sig in, ingen behandlas orättvist. Den juridiska lagen upphöjs till naturvetenskap och ges samma status som gravitationen eller ljusets konstanta hastighet som om lagens ursprung var en objektiv kunskapskälla, i sig fri från fördomar och statisk över tid. I själva verket har det visat sig att det som på pappret kan verka rättssäkert i praktiken ofta fått motsatt effekt. 
Datorprogrammet Compas har till exempel visat sig dra slutsatser i linje med rasistiska fördomar, vilket både beror på programkodens design och på den statistik som används. Men att se etiskt på sin omvärld är att värja sig mot tron att allt går att reducera till fakta. Kunskap är att äta, men människan får aldrig förbrukas som vetenskap; vi får inte göra den Andre till det Samma genom att tugga och svälja henne. Vår syn på detta andra som inte får objektifieras kategoriseras, katalogiseras, kartläggas är enligt moralfilosofen Emmanuel Lévinas avgörande för vår möjlighet att tänka bortom det totalitära. Att möta en annan människa är att hållas vaken av en gåta säger han, och pekar på att det oförutbestämda också bär möjlighetens frö. Kanske mot helt nya samhällsordningar, kanske mot en hjälpande hand i en stund då man minst förväntar sig en. Kanske kan man kalla det en tillåtelse att frångå protokollet. Men kan inte ett automatiserat beteendet vid sidan av argument om effektivitet och ekonomisk vinning också vara av godo? För att ställa frågan på sin spets kan den självkörande bilen statuera exempel. Dessa skulle, sägs det, kunna rädda tusentals liv. Det amerikanska lärosätet MIT Massachusetts Institute of Technology har ett interaktivt test på sin hemsida. Testet heter Moral Machine och går ut på att jag som användare utifrån ett visst antal situationer ska avgöra hur en självkörande bil bör agera. Jag startar testet. Ett blått fordon och några röda människofigurer dyker upp. Det är inte utan obehag som jag tvingas peka och klicka på vem av dem som måste dö. I scen efter scen väljer jag sedan mellan gångtrafikanter, passagerare och djur. När testet är klart inser jag att det i varje scen funnits mer information än vad jag först trodde. 
The summary shows not only how many passengers and pedestrians I killed, but also how many men, women, fit people, overweight people, and persons described as having high or low social value I let perish. This is the reality the programmer faces. Without this categorizing activity, it is simply impossible to create a program. The question of ethics tears itself loose from the present and is carried back to another decision-making instance. One way to see it is that the future becomes locked: there, new information simply cannot affect the situation. What I encounter in traffic is knowledge produced in a bygone time, an already eaten piece of the world. But what if it saves lives? On a philosophical level, the question broadly pits a notion of an outcome against a renunciation of ourselves as actors with ethical agency. For the equation of ethics to add up, however, the future must be indisputably new, which can only happen if it is at the same time un-predetermined. Lévinas likens a now in which all hope of renewal has been lost to a state of insomnia: there is indeed wakefulness, but it is always the same now, or the same past, that endures. Wakefulness becomes dream, an im-personal consciousness. In the case of the self-driving car, it is perhaps fairer to set it not against a car with a driver, but against a society striving away from car dependence. Inspired by another moral philosopher, Immanuel Kant, we could also ponder the intention of the journey. Can a machine be called moral without also considering the structure it operates in? Is a society built around predetermined decisions a place where ethics is possible? The question of who is driving does not seem to suffice. We must also ask: where are we headed? Christina Gratorp, writer and engineer

AI The New Sexy
15 | El dilema de la movilidad y los coches autónomos

AI The New Sexy

Play Episode Listen Later Oct 1, 2020 35:26


In this episode we take a tour of all the implications of mobility. We discuss advances in autonomous cars from companies such as Toyota and Nissan and question the design of these new vehicles, reflect on how people from different countries would react to the MIT Media Lab's Moral Machine experiment, and finally talk about applications of artificial intelligence in micro-mobility. Twitter: www.twitter.com/AITheNewSexy Instagram: www.instagram.com/AITheNewSexy Facebook: www.facebook.com/AITheNewSexy

ReConnected
Episode 4 - The Real Moral Machine

ReConnected

Play Episode Listen Later Dec 20, 2019 123:27


Matt, Josh and Lewis make a mistake. Shownotes: Dark Souls Bosses, "Unnaturally Multicultural" Video, Juror Rules, Juror Payments, The Trolley Problem, Multitrack Drifting, Moral Machine, Utilitarianism, 01:45:27 Guitar Hero Guitar

En Flott Podcast
#54 Brekningsmiddelmyteexposé

En Flott Podcast

Play Episode Listen Later Nov 3, 2019 59:43


Moral Machine by MIT

Podcast do Andarilho
Dilemas éticos e máquina moral (The Moral Machine) - Minicast #3

Podcast do Andarilho

Play Episode Listen Later Oct 26, 2019 30:38


If your intervention could save 5 people by killing one, and your omission would save one person while 5 others died, what would you do? Come talk about ethics, morality and autonomous cars!

Les Friday Lives
Moral Machine : le dilemme moral de l'Intelligence Artificielle

Les Friday Lives

Play Episode Listen Later Aug 21, 2019 5:37


A CNRS researcher named Jean-François Bonnefon set up a worldwide study called "Moral Machine". The goal of the study? To deepen our understanding of artificial intelligence, and more precisely of its morality, a quality usually attributed only to humankind. Alain Garnier looks back at this study and its particularly interesting results in this new Friday Live.

Dark Stuff: With Christian & Suann
077: The Moral Machine & Jailhouse Clock

Dark Stuff: With Christian & Suann

Play Episode Listen Later Jun 24, 2019 61:48


This week Christian puts Suann to the test with a would you rather that goes on forever, we learn a lot about Suann, mostly that she values puppy lives over human lives. That’s all.

Podcast 2030
22 - To veer, or not to veer

Podcast 2030

Play Episode Listen Later May 31, 2019 29:08


We're happy to bring you an interview with researcher Edmond Awad, who worked on creating Moral Machine, an online platform which presents users with moral dilemmas. Users are required to make decisions about how an autonomous vehicle should behave, when an accident is unavoidable. Should the car kill the two passengers inside or the five pedestrians crossing the street? (Go on, try it out for yourself!) We discuss how this experiment was set up and analyze the results, after collecting more than 40 million moral decisions. Apart from being fascinating, these results are also very relevant, as self-driving cars could be driving down our roads as soon as next year, in 2020 (according to Tesla's ambitious plans to deploy a robotaxi fleet). The crowdsourced data collected by Moral Machine and similar projects will help further the discussion on how to develop software for artificial intelligence.

Mercurio
#3 - Deficienza reale, intelligenza artificiale

Mercurio

Play Episode Listen Later May 25, 2019 9:34


Don't worry, Mercurio, I know the first two episodes demoralized you, but today I'll show you that humanity doesn't have only flaws. We'll talk about artificial intelligence! (But is it really that intelligent?) ++ SUBSCRIBE TO THE TELEGRAM CHANNEL ► https://t.me/mercuriopodcast ++ EPISODE CHAPTERS (01:15) Moral Machine (04:22) Taroxa (04:44) Turing Test (06:21) Flaws of AI (06:57) Fruit and vegetables (07:38) Japan robots (08:28) Grand finale ++ CONTACTS Want to keep in close touch? Here's where you can find me and let me know what you think of the episode! I'd really appreciate it :) ► Instagram: https://www.instagram.com/mercuriopodcast/ ► Twitter: https://twitter.com/MercurioPodcast ► Blog: http://mercuriopodcast.com

Puzsér Podcast | Önkényes Mérvadó
Önkényes Mérvadó különkiadás gyermekekről és idősekről

Puzsér Podcast | Önkényes Mérvadó

Play Episode Listen Later Apr 4, 2019


A topic similar to our episode about the Moral Machine: it's easy to blurt out whether the greater loss is the life of a 4-year-old child or of an elderly person with 4 years left to live. But what if we keep probing and turning it over? And while we're at it, do children bother you more, or the elderly? Your answer may reveal where you belong. An uncensored online special edition in honor of the burial of 2018, with Ági Csízi, Oszkár Horváth and Róbert Puzsér. 2018.12.31. (Recorded: 2018.12.20.)

Philosophy Un(phil)tered
Azim Shariff: The Moral Machine Experiment and Autonomous Vehicles

Philosophy Un(phil)tered

Play Episode Listen Later Mar 6, 2019


In this episode Anika Kuchukova, an undergraduate at Duke Kunshan University, and I interview Azim Shariff, an Associate Professor of Psychology at the University of British Columbia, on the Moral Machine Experiment and autonomous vehicles.

Café com Dungeon
#192 - RPG e Utilitarismo

Café com Dungeon

Play Episode Listen Later Dec 13, 2018 30:57


In this coffee-break episode, Heitor brings Balbi the famous Trolley Problem and utilitarian questions for RPGs, and analyzes the typical freedom of RPGs almost as a negation of this question. From there they dig into corner-bakery philosophy and stretch the game toward the logic of the Apocalypse Engine, smart cars and, of course, Revolution! Heitor Coelho holds a PhD in philosophy and his works and studies can be found here; his Lattes CV here. And here is the link to the MIT study, Moral Machine, cited in the episode. The songs used in the episode were "Faster Does It" by Kevin MacLeod and "Language of my Reality" by Omnivista. _________________________ Our channel is Regra da Casa. Live games on Twitch every Tuesday at 10:30 pm and Wednesday at 9 pm. Find us on social media too: Instagram, Twitter and Facebook of Regra da Casa!

AI with AI
AI with AI: But Is It Art(ificial)?

AI with AI

Play Episode Listen Later Nov 23, 2018 28:10


In the latest news, Andy and Dave discuss Microsoft’s announcement that it will sell artificial intelligence and other advanced technology to the Pentagon; Google is giving $25M to projects that use artificial intelligence for humanitarian projects; Stanford announces the Human-Centered AI initiative; AdaNet offers fast and flexible AutoML with “learning guarantees;” and a “human brain” supercomputer (using neuromorphic computing) with 1 million processors is switched online for the first time. In other stories, Andy and Dave discuss the AI-generated portrait that sold at a Christie’s auction for $432,500. MIT Media Lab announces the results of their “Moral Machine” experiment, which asked people around the globe to choose how a self-driving vehicle should behave in different moral dilemmas. And GoogleAI describes its “fluid annotation” method, an exploratory machine learning-powered interface for faster image annotation.

Coffee Break: Señal y Ruido
Ep187: Evento Tunguska; Dipolo del Electrón y Simetría CP; Moral Machine Experiment; La Pseudociencia de Gwyneth Paltrow

Coffee Break: Señal y Ruido

Play Episode Listen Later Nov 1, 2018 146:40


The weekly round table in which we review the latest science news. In today's episode: new results on Tunguska 1908, the largest impact recorded in history; the culture of morality: how trying to teach machines teaches us about our own inconsistencies; new bounds on the sphericity of the electron: searching for violations of CP symmetry; the pseudoscience scam: products from Gwyneth Paltrow's Goop brand analyzed; miscellaneous news: TMT, Kepler, "The War of the Worlds", recent extinctions. In the photo, from top to bottom and left to right: Sara Robisco, Carlos González, Bernabé Cedrés, Carlos Westendorp, Héctor Socas. All comments made during the discussion represent solely the opinion of the person making them... and sometimes not even that. CB:SyR is a collaboration between the Research Area and the Scientific Communication and Culture Unit (UC3) of the Instituto de Astrofísica de Canarias.

Squabble
The Self-Driving Car Dilemma - Ep. 14

Squabble

Play Episode Listen Later May 28, 2018 38:26


Self-driving cars are becoming more and more popular. But what should the car do if it has to make an instant moral choice? That's what Jacob and Carter discuss using the Moral Machine. Take the Moral Machine test yourself: http://moralmachine.mit.edu/ We're still new to podcasting. However, if you like where we're headed subscribe, follow, like, comment, and share! We appreciate it! Help us create MORE content: Patreon: https://www.patreon.com/squabblepodcast Follow us: Squabble on Facebook: www.facebook.com/squabble Squabble on Instagram: @squabblepodcast Listen and/or watch: Our website: www.squabblepodcast.com iTunes: https://itunes.apple.com/us/podcast/squabble/id1354064046?mt=2 SoundCloud: https://soundcloud.com/user-861436839 YouTube: https://www.youtube.com/watch?v=UY2U3qXSgbE Credits: Hosts: Jacob and Carter Andrews Music: "Funk Game Loop" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License creativecommons.org/licenses/by/3.0/

Talking Fail
54. You Gunna Cry About It?

Talking Fail

Play Episode Listen Later Apr 14, 2018 44:41


Ethan's laptop died so Aanna took one for the team and filled in for him. We talk about the Westworld Spoiler Video/spoilers in general, and we play the Moral Machine to choose who to kill. http://www.talking.fail/ Support us: https://www.patreon.com/talkingfail Facebook: https://www.facebook.com/TalkingFail/ Twitter: https://twitter.com/TalkingFail D&D Podcast that Tyler plays on: Podculture Plays D&D http://nerdythingspod.com/podcast/podculture-plays-dd-episode-1-cast-detect-evil-on-the-dm/ Music Podcast with Tyler & Brian Matthews: The Discgographers thediscographers.simplecast.fm Ethan & Will's YouTube show: Cruisin' Craigslist https://www.youtube.com/watch?v=h4oGzc8Z5yU&list=PLd1TqWvZfFdA0m-EB1uPLoP1XCyFEzF_b

Positive Feedback Loop
Ep. 40: Autonomous Morality

Positive Feedback Loop

Play Episode Listen Later Oct 2, 2017 48:08


Episode notes: MIT’s Moral Machine test Who would you save? Fate and interference, guilt Kant’s autonomous morality Utilitarianism Patience with people vs. machines MBTA and Amtrak stories Consumer comfort with driverless cars Blame in autonomous vehicle accidents The future of machine autonomy Applying moral frameworks in the context of changing technology

Radiolab
Driverless Dilemma

Radiolab

Play Episode Listen Later Sep 26, 2017 40:01


Most of us would sacrifice one person to save five. It’s a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy. That’s the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did about 11 years ago. Luckily, the Trolley Problem has always been little more than a thought experiment, mostly confined to conversations at a certain kind of cocktail party. That is until now. New technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that we can’t even figure out ourselves. This story was reported and produced by Amanda Aronczyk and Bethel Habte. Thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from the Moral Machine group at MIT. Also thanks to Fiery Cushman, Matthew DeBord, Sertac Karaman, Martine Powers, Xin Xiang, and Roborace for all of their help. Thanks to the CUNY Graduate School of Journalism students who collected the vox: Chelsea Donohue, Ivan Flores, David Gentile, Maite Hernandez, Claudia Irizarry-Aponte, Comice Johnson, Richard Loria, Nivian Malik, Avery Miles, Alexandra Semenova, Kalah Siegel, Mark Suleymanov, Andee Tagle, Shaydanay Urbani, Isvett Verde and Reece Williams. Support Radiolab today at Radiolab.org/donate.

Berkman Klein Center for Internet and Society: Audio Fishbowl
Joi Ito and Iyad Rahwan on AI & Society

Berkman Klein Center for Internet and Society: Audio Fishbowl

Play Episode Listen Later Apr 13, 2017 69:06


AI technologies have the potential to vastly enhance the performance of many systems and institutions, from making transportation safer, to enhancing the accuracy of medical diagnosis, to improving the efficiency of food safety inspections. However, AI systems can also create moral hazards, by potentially diminishing human accountability, perpetuating biases that are inherent to the AI's training data, or optimizing for one performance measure at the expense of others. These challenges require new kinds of "user interfaces" between machines and society. We will explore these issues, and how they would interface with existing institutions. About Joi Ito Joi Ito is the director of the MIT Media Lab, Professor of the Practice at MIT and the author, with Jeff Howe, of Whiplash: How to Survive Our Faster Future (Grand Central Publishing, 2016). Ito is chairman of the board of PureTech Health and serves on several other boards, including The New York Times Company, Sony Corporation, the MacArthur Foundation and the Knight Foundation. He is also the former chairman and CEO of Creative Commons, and a former board member of ICANN, The Open Source Initiative, and The Mozilla Foundation. Ito is a serial entrepreneur who helped start and run numerous companies including one of the first web companies in Japan, Digital Garage, and the first commercial Internet service provider in Japan, PSINet Japan/IIKK. He has been an early-stage investor in many companies, including Formlabs, Flickr, Kickstarter, littleBits, and Twitter. Ito has received numerous awards, including the Lifetime Achievement Award from the Oxford Internet Institute and the Golden Plate Award from the Academy of Achievement, and he was inducted into the SXSW Interactive Festival Hall of Fame in 2014. Ito has been awarded honorary doctorates from The New School and Tufts University. 
About Iyad Rahwan Iyad Rahwan is the AT&T Career Development Professor and an Associate Professor of Media Arts & Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia, and is an affiliate faculty at the MIT Institute of Data, Systems and Society (IDSS). Rahwan's work lies at the intersection of the computer and social sciences, with a focus on collective intelligence, large-scale cooperation, and the social aspects of Artificial Intelligence. His team built the Moral Machine, which has collected 28 million decisions to-date about how autonomous cars should prioritize risk. Rahwan's work appeared in major academic journals, including Science and PNAS, and was featured in major media outlets, including the New York Times, The Economist, Wall Street Journal, and the Washington Post. More info on this event here: https://cyber.harvard.edu/events/luncheons/2017/04/Ito

Two Bit Geeks
Episode 03: To Infinity (and the Stroller)

Two Bit Geeks

Play Episode Listen Later Dec 18, 2016 43:33


Ped flips the script on a telemarketer, Tom embraces his inner athlete, exploring the value of exploring space, assessing the relative worth of shrimp and Daleks and losing one’s composure with The Moral Machine. Discuss the podcast on Reddit. Support the podcast on Patreon. Consciously crafting positive human interactions [00:00] The problem with the Do-Not-Call List Tom’s first 5K (with actual real people) [03:45] C25K Deep Cut: growing tomatoes in Plato’s Cave Tom’s preferred transport method Should we explore Space? [16:50] NASA, The President and Stephen Hawking all seem to think so SpaceX: Elon Musk’s ambitious effort to colonize Mars by 2024 Odds of an asteroid hitting you (and tips on avoiding it) Assessing the value of shrimp and Daleks Putting our best foot forward Self-driving cars and The Moral Machine [30:47] The Moral Machine A novel solution to the trolley problem The effect of podcasts on young children Music by Lee Rosevere (CC by 4.0)

Ten Tenths Podcast
#42 - Two 9s And a 6 are Crossing the Road

Ten Tenths Podcast

Play Episode Listen Later Nov 6, 2016 88:52


In this episode, the guys take MIT's "Moral Machine" survey, which is intended to determine how humans make moral decisions while driving. Instead, the show goes down some very odd roads, including one scenario where they are forced to kill Dan Bilzerian. As always, be sure to check out the show notes and our store at www.tententhspodcast.com

The Undefined Gen
014: Kelsey Atherton: Demystifying Drones, the Morality of Automation, and the Future of Warfare

The Undefined Gen

Play Episode Listen Later Oct 11, 2016 67:00


This week I talk to Kelsey Atherton a staff writer at Popular Science Magazine. He primarily covers unmanned vehicles and defense technology. First we look at drones and different development projects being worked on as well as current and potential legislation surrounding drones. Then we move into automated technology and discuss moral dilemmas that come with programming machines to think for themselves using an example by MIT called the Moral Machine. Finally, we get into defense technology and some old weapons that are relevant today as well as future technologies. Kelsey also weighs in on some ethical questions of warfare.

IT-Keller
ITK016 Der Trend geht wieder in Richtung Buchstaben

IT-Keller

Play Episode Listen Later Oct 9, 2016 129:08


Topics: Atom text editor, Electron cross-platform framework for desktop apps, PowerShell is open source, Nosulus Rift, "New Mac" candle scent, rumors about an Apple Store in Vienna, iTunes alternative iTools, ZEIº time-tracking cube, Vello Bike+, Lilium Aviation, AeroMobil, ground-effect vehicle, Caspian Sea Monster, self-driving car dilemma Moral Machine, the Rosetta mission comes to an end, 67P/Churyumov-Gerasimenko, functional programming, Elixir, Phoenix web framework, Layer 8 Podcast, Panoptikum podcast discovery and community, Subscribe 8 from Oct 14-16, 2016 in Munich, DevFest Vienna 2016 from Nov 5-6, 2016, Privacy Week in Vienna from Oct 24-30, 2016, chatbots, WeChat, Turing test, Watson as a Service, Microsoft Cognitive Services APIs, The AI Revolution: The Road to Superintelligence, AKG closes its Vienna plant, Aua-uff-Code! podcast, Vienna Beamers (Twitter), emoji, Emoji programming language (Emojicode), Isotype, Otto Neurath, Biertaucher podcast. Guests: Bernhard, Sindre and Stefan (Twitter)

Diciendo Charradas - La Patada FM
Diciendo Charradas #5: La Máquina Moral

Diciendo Charradas - La Patada FM

Play Episode Listen Later Sep 16, 2016


Enrique and Jorge talk about the psychology behind top-10 lists, then try The Moral Machine, where they gamble that overweight people or people with pets at home will abandon the podcast forever, and finish off talking about a porn karaoke. Don't forget you can find the podcast on iTunes and iVoox! SPONSORSHIP Buy us a coffee NOTES WatchMojo.com on Youtube The Moral Machine at MIT

Glitch Podcast
11: Jeroen Disch (Grrr)-Klantgericht als ouders op een schoolfeest

Glitch Podcast

Play Episode Listen Later Aug 18, 2016 90:23


"Modern organizations want to stand on an equal footing with their audience. But usually they stand on the sidelines like parents at a school party." Jeroen Disch, Design Director at grrr.nl, explains how he puts his clients' audiences at the center of his process. With his design team he works on UX, visual identities and campaigns for clients such as Artsen zonder Grenzen, the Haags Filmhuis, and the Nederlands Instituut voor Beeld en Geluid. Jeroen Disch studied at the Hogeschool voor de Kunsten Utrecht and the Sandberg Instituut. He has more than 10 years of experience as a designer and creative strategist, at LAVA and Edenspiekermann among others. Glitch is a podcast about and with makers from the world of design, technology and media: glitch.show This episode is made possible in part by Edenspiekermann. A great place to work! jobs.edenspiekermann.com Do you like Glitch? Let others know! Share us on your favorite social network or write a review on iTunes. Or email us: redactie@glitch.show We talked about: -Moral Machine by MIT and Google tr.im/moralmachine -Making an AI chatbot more human through human input tr.im/itskoko -Snapchat and Instagram Stories -Pikachu catches people tr.im/basel -Reinier blocked by Wilders -Airbnb has started a design studio tr.im/samara -What does Grrr do? tr.im/grrr -How do design and Scrum work together? -Scrumming an identity -How to deal with external product owners -Open API for school data -Working multidisciplinarily -Apple's pistol emoji -Emoji domain names http://???.tk -Tattoo robot tr.im/tattoorobot -Stealing money from Instagram, Microsoft and Google -Security in the Internet of Things tr.im/iotsecurity -Where is the best place to break in? tr.im/inbreken -Stranger Things with free-range kids tr.im/scharrelkids -Tokyoifying the world tr.im/tokyoifying -What makes Grrr Grrr -Grrr makes Fietsy tr.im/fietsy Coverart: Linda Tetteroo tr.im/Linda Music: BigOrange Music tr.im/bigorange Special Guest: Jeroen Disch.