Podcasts about MIT Moral Machine

  • 4 PODCASTS
  • 5 EPISODES
  • 37m AVG DURATION
  • ? INFREQUENT EPISODES
  • Nov 5, 2019 LATEST

POPULARITY

(popularity chart, 2017–2024)


Best podcasts about MIT Moral Machine

Latest podcast episodes about MIT Moral Machine

AI in Education Podcast

This week Dan and Ray go in the opposite direction from the last two episodes. After talking about AI for Good and AI for Accessibility, this week they look at how AI can be used in ways that disadvantage people and skew decisions. The line between 'good' and 'evil' can be very fine, and the same artificial intelligence technology can serve either purpose depending on the decisions, witting or unwitting, of the people using it. During the chat, Ray discovered that Dan is more of a 'Dr Evil' than he'd previously thought, and together they discover that people perceive 'good' and 'evil' differently when it comes to AI's use in education.

This episode is much less focused on the technology itself and instead spends its time on the outcomes of using it. Ray mentions the "MIT Trolley Problem", which is actually two things. The Trolley Problem, the work of English philosopher Philippa Foot, is a thought experiment in ethics about whether to divert a runaway tram. The MIT Moral Machine, which builds on this work, is about making judgements on behalf of driverless cars: the website asks you to make the moral decisions and weigh the consequences. It's a great activity for colleagues and for students, because it leads to a lot of discussion.

Two other links mentioned in the podcast are the CSIRO Data61 discussion paper, part of the consultation on AI ethics in Australia (downloadable here: https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/), and the Microsoft AI Principles (available here: https://www.microsoft.com/en-us/AI/our-approach-to-ai).

Post Modern Family Living
A.I. & cars that will kill us.

Post Modern Family Living

Feb 13, 2018 · 20:00


The MIT Moral Machine is a simulator that extracts our moral instincts. Let's discuss how moral it is.

Driverless Radio
13. Driverless Cars Will Decide Who Survives in a Crash—Why I Hate The Trolley Problem

Driverless Radio

Feb 6, 2018 · 18:48


For almost 100 years, scholars have been debating the Trolley Problem. The scenario is simple. In its original form, a trolley or train is speeding down a set of railroad tracks, out of control and unable to brake. Ahead of the trolley is a Y-intersection. On one fork of the intersection, 5 innocent people are tied up on the tracks. On the other fork, a child is tied up on the tracks. At the intersection, a man could allow the trolley to continue its course, undoubtedly killing 5 innocent people, or he could divert the train and kill only the one child... a child who happens to be his own flesh and blood. Should the man sacrifice his own child to save the lives of 5?

This scenario, and others like it, have been used in philosophy and ethics classes for decades. Most recently, institutions like MIT have updated the problem to include driverless car scenarios. The MIT Moral Machine lets users make decisions as if they were an autonomous car. Over the course of many binary, one-or-the-other scenarios, users select whether they would rather kill a grandma or a dog, a man or a group of cats, a bank robber or a doctor, etc.

I absolutely hate the trolley problem and all its incarnations. The trolley problem was originally designed to provoke ethical discussions about human decision making. It should be left in ethics classes and left out of engineering discussions. In its original form, there is nothing natural or ethical about the scenario of an out-of-control trolley barreling down the track towards five people tied up. It would be impossible to predict how anyone would act in that situation. The real focus should not be on the decision to divert the train; instead, the focus should be on the sadistic bastard who cut the train's brakes, tied six people to the track, and coerced a person to choose between the life or death of their child. That is a seriously screwed up criminal. Let's talk about criminal ethics instead of autonomous ethics. Just as it is impossible for the trolley problem to ever happen without criminal intent, it is also impossible for a car to be travelling down the street and need to decide whether it wants to kill a group of five old ladies or a group of five bank robbers.

I hate the trolley problem for three main reasons.

#1. Attempting to apply the trolley problem to teach or evaluate autonomous cars demonstrates a fundamental lack of understanding of machine learning and artificial intelligence. These cars will have many different sensors. They are looking for obstacles. They will never be programmed to rank-order the value of old ladies versus children. To a car, both are obstacles sensed by a variety of cameras, radar, lidar, etc. This data feeds into an artificial intelligence machine that uses complex algorithms and probabilistic models to select the best course of action. These cars will almost never have to make the simple binary, either-or, life-or-death decisions postulated in the trolley problem.

#2. The chances of a car getting into a situation where it needs to decide in a split second which of two obstacles it will hit are nil without criminal intent.

#3. In almost all scenarios, the best, most ethical, simplest, safest course of action for a driverless car will be to brake. It is that simple. If a car cannot navigate around an obstacle, it should brake. These cars will have reaction times and sensors that exceed human capability (I saw this first-hand in Las Vegas). They will be able to sense obstacles long before humans can and will almost always be able to brake before hitting one. Braking reduces speed and increases the occupants' chance of survival. Swerving to hit another obstacle would be stupid regardless of what that obstacle was. #EndRant

Now, I never want to talk about the Trolley Problem ever again. If you equally detest the Trolley Problem,
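The "brake-first" logic the host argues for can be illustrated with a minimal, hypothetical sketch. This is not any real autonomous-driving stack; the types, names, and thresholds below are invented for illustration. The point it demonstrates is the one made in the episode: the planner treats every detected object as geometry to avoid, and when it cannot clear the path it brakes, without ever classifying or ranking who or what the obstacles are.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical, simplified perception output -- real stacks fuse camera,
# radar, and lidar into far richer representations than this.
@dataclass
class Obstacle:
    distance_m: float        # distance ahead of the vehicle
    lateral_offset_m: float  # offset from the car's current path

@dataclass
class Action:
    kind: str                          # "continue" or "brake"
    brake_fraction: Optional[float] = None  # fraction of maximum braking

BRAKING_DISTANCE_M = 30.0   # invented threshold, for illustration only
CLEAR_PATH_OFFSET_M = 2.0   # obstacles farther off-path than this are ignored

def plan(obstacles: List[Obstacle]) -> Action:
    """Pick an action without classifying or ranking obstacles.

    Every obstacle is just geometry: if something blocks the path and it is
    too close to pass safely, the answer is to brake, never to pick a victim.
    """
    in_path = [o for o in obstacles if abs(o.lateral_offset_m) < CLEAR_PATH_OFFSET_M]
    if not in_path:
        return Action("continue")

    nearest_m = min(o.distance_m for o in in_path)
    if nearest_m > BRAKING_DISTANCE_M:
        # Far enough away to slow gently and reassess on the next cycle.
        return Action("brake", brake_fraction=0.3)
    # Too close to guarantee a safe manoeuvre: brake hard.
    return Action("brake", brake_fraction=1.0)

if __name__ == "__main__":
    # An obstacle 20 m ahead, dead centre of the path: brake hard.
    print(plan([Obstacle(distance_m=20.0, lateral_offset_m=0.0)]))
    # The same obstacle 5 m off to the side: continue.
    print(plan([Obstacle(distance_m=20.0, lateral_offset_m=5.0)]))
```

The sketch deliberately has no notion of "grandma versus dog": the only inputs are distances and offsets, which is the episode's argument for why the Moral Machine's ranking scenarios do not map onto how such a planner would actually be built.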

Tech Talk Radio Podcast
October 29, 2016 Tech Talk Radio Show

Tech Talk Radio Podcast

Oct 29, 2016 · 58:46


TinyURL explained (used to shorten web addresses), Google Home vs Amazon Echo (war for home dominance), Chromebook vs Windows laptop (selection depends on need for installed apps), checking Facebook account activity (a good practice after travelling), Profiles in IT (Gabe Logan Newell, co-founder of Valve Video Games), Google privacy policy changed (linked account with DoubleClick cookies, possible to opt out), BleachBit file destruction (Hillary made it famous), IBM deploying 100,000 Macs (fewer helpdesk tickets than with PCs), IoT Botnet available on the Dark Web (7,500 dollars for 100,000 bots), and MIT Moral Machine (applied to autonomous car scenarios). This show originally aired on Saturday, October 29, 2016, at 9:00 AM EST on WFED (1500 AM).
