Podcasts about responsible robotics

  • 12 podcasts
  • 14 episodes
  • 35m avg. duration
  • Infrequent episodes
  • Latest: Aug 24, 2023

POPULARITY

[Popularity chart, 2017–2024]


Latest podcast episodes about responsible robotics

The Dissenter
#825 Mark Coeckelbergh - Self-Improvement: Technologies of the Soul in the Age of Artificial Intelligence

Aug 24, 2023 · 47:40


Dr. Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna. He is a member of the High-Level Expert Group on Artificial Intelligence for the European Commission and the Rat für Robotik, as well as a member of the Technical Expert Committee (TEC) of the Foundation for Responsible Robotics. His work focuses on the philosophy of technology, in particular the ethics of robotics and ICTs. He is the author of several books, including his most recent, Self-Improvement: Technologies of the Soul in the Age of Artificial Intelligence.

In this episode, we focus on Self-Improvement. We start by talking about the philosophical history behind the culture of self-improvement. We discuss how we present ourselves on social media and the idea of “authenticity”. We talk about spiritual narcissism and the current popularity of Stoicism. We discuss how we obsess over self-improvement while ignoring collective solutions and political and socioeconomic issues. We talk about self-improvement as a means of class signaling, and “technosolutionist” approaches involving transhumanism and self-enhancement. We discuss how neoliberalism ties into self-improvement ideas, and how obsessed we are with being productive. Finally, we talk about how we can reframe self-improvement, and a positive role for technology.

Power Lunch Live
Rhett Power with Garry Kasparov on Power Lunch Live

Sep 13, 2021 · 30:31


Garry Kasparov became the under-18 chess champion of the USSR at the age of 12 and the world under-20 champion at 17. He came to international fame at the age of 22 when he became the youngest world chess champion in history in 1985. He defended his title five times, including a legendary series of matches against arch-rival Anatoly Karpov, and broke Bobby Fischer's rating record in 1990. His famous matches against the IBM supercomputer Deep Blue in 1996–97 were key to bringing artificial intelligence, and chess, into the mainstream.

Kasparov has been a contributing editor to The Wall Street Journal since 1991 and is a regular commentator on politics and human rights. He speaks frequently to business and political audiences around the world on technology, strategy, politics, and achieving peak mental performance. He is a Senior Visiting Fellow at the Oxford Martin School with a focus on human-machine collaboration, a member of the executive advisory board of the Foundation for Responsible Robotics, and a Security Ambassador for Avast Software, where he discusses cybersecurity and the digital future.

Kasparov's book How Life Imitates Chess, on strategy and decision-making, is available in over 20 languages. He is the author of two acclaimed series of chess books, My Great Predecessors and Modern Chess. His 2015 book, Winter Is Coming: Why Vladimir Putin and the Enemies of the Free World Must Be Stopped, is a blend of history, memoir, and current-events analysis. His book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (May 2017) details his matches against Deep Blue, his years of research and lectures on human and machine competition and collaboration, and his cooperation with the Future of Humanity Institute at the University of Oxford. He says: “AI will transform everything we do and we must press forward ambitiously in the one area robots cannot compete with humans: in dreaming big dreams. Our machines will help us achieve them. Instead of worrying about what machines can do, we should worry more about what they still cannot do.”

Future Positive
How to Develop Accountable AI Solutions

Nov 21, 2020 · 36:07


Welcome back to another edition of Future Positive, a podcast from XPRIZE. We convene the world’s brightest minds, across a kaleidoscope of cultures and points of view, revealing their inspirations, and how and why they will change the world.

Since the foundation of the AI for Good movement, there have been many great examples of how AI can help solve the world’s problems and address the UN’s Sustainable Development Goals. But AI technology is being developed in a manner that puts its governance at stake, and the signatories of the SDGs have pledged to Leave No One Behind, raising many questions about accountability. How do we develop accountable AI solutions? Does regulation stifle innovation? How can we ensure exponential technology is also ethical? This episode, featuring robot ethicist Aimee van Wynsberghe and the Council of Europe’s Director of Information Society Jan Kleijssen, explores data ethics as a resource for understanding risk and establishing a legal framework for AI.

Aimee van Wynsberghe has been working in ICT and robotics since 2004. She began her career as part of a research team investigating the network variables related to surgical robots at the CSTAR (Canadian Surgical Technologies and Advanced Robotics) Institute in Canada. She is Assistant Professor in Ethics and Technology at TU Delft. She is co-founder and co-director of the Foundation for Responsible Robotics and serves on the boards of the Institute for Accountability in the Digital Age and the Dutch National Alliance on AI (ALLAI). She is a member of the European Commission’s High-Level Expert Group on AI. Aimee has been named one of the Netherlands’ top 400 influential women under 38 by VIVA, was named one of the 25 ‘women in robotics you need to know about’, and is the 2018 winner of the L'Oréal/UNESCO ‘Women in Science’ award. She is the author of the book Healthcare Robots: Ethics, Design, and Implementation and has been awarded an NWO personal research grant to study how we can responsibly design service robots. She has been interviewed by the BBC, Quartz, the Financial Times, and other international news media on the topic of ethics and robots, and is invited to speak at international conferences and summits.

Jan Kleijssen was born in 1958 in Almelo, The Netherlands. He studied International Law at Utrecht State University (LLM, 1981) and International Affairs at the Norman Paterson School of International Affairs, Carleton University, Ottawa (MA, 1982). He then did his military service as a Sub-Lieutenant in the Royal Netherlands Navy. Jan joined the Council of Europe in 1983 as a Lawyer with the European Commission of Human Rights. In 1987, he was appointed to the Secretariat of the Parliamentary Assembly and was Secretary to the Political Affairs Committee from 1990 to 1999. He then served as Director of the Secretary General's Private Office until 2004, and subsequently as Director in the Parliamentary Assembly and Special Advisor to the President. In 2006, he moved to the Directorate General of Human Rights and was Director of Standard-setting until 2011, when he was appointed to his current function of Director of Information Society - Action against Crime, Directorate General of Human Rights and Rule of Law.

https://xprize.org

Philosophical Disquisitions
77 - Should AI be Explainable?

Jul 20, 2020


If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico, and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:

  • Why do people worry about the opacity of AI?
  • What's the difference between explainability and transparency?
  • What's the moral value or function of explainable AI?
  • Must we distinguish between the ethical value of an explanation and its epistemic value?
  • Why is it so technically difficult to make AI explainable?
  • Will we ever have a technical solution to the explanation problem?
  • Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
  • When should we insist on explanations, and when are they unnecessary?
  • Should we insist on using boring AI?

Relevant Links

  • Scott's webpage
  • Scott's paper "A Misdirected Principle with a Catch: Explicability for AI"
  • Scott's paper "The Value of Transparency: Bulk Data and Authorisation"
  • "The Right to an Explanation Explained" by Margot Kaminski
  • Episode 36 - Wachter on Algorithms and Explanations

Subscribe to the newsletter

Authors Love Readers, a Weekly Conversation on Writing
BookBub for Readers, with Audrey Derobert

Feb 12, 2020 · 28:16


Audrey Derobert works on the Partners Team at BookBub, a free service that helps millions of readers discover limited-time ebook deals. Members receive a personalized daily email alerting them to the best free and deeply discounted titles matching their interests. Audrey is an Account Manager, supporting independent authors and small publishers who use BookBub's tools as part of their marketing strategy. BookBub now has an audiobook retail website, Chirp, and is offering deals on titles there too, including classic books as well as newer titles.

Audrey invites you to check out the BookBub Partners Blog here. Authors can subscribe to get weekly book marketing ideas, publishing insights, and BookBub tips delivered to their inbox. A graduate of Northeastern University in Boston, Audrey has previously worked with the Foundation for Responsible Robotics in the Netherlands and as a legislative intern for Massachusetts state Rep. Bill Driscoll. In her own words: "It's such a low-stakes way of trying out a new author. So if you're not sure, or if you're just curious about learning more about what that author has to offer, that's a really easy way to try out one of their books." (10:22)

Thanks to DialogMusik for the instrumentals that accompany this podcast. Thank you so much for listening. We hope you enjoyed the podcast enough to want to support us for future episodes. You can do that with as little as $1 a month by pledging at Patreon. It’s vital to Authors Love Readers to have your support. Thank you! Please also consider rating/reviewing the podcast wherever you listen to podcasts.

MIND & MACHINE: Future Tech + Futurist Ideas + Futurism
Tech & Artificial Intelligence Ethics with Silicon Valley Ethicist Shannon Vallor

Nov 20, 2018 · 57:28


My guest today is Shannon Vallor, a technology and A.I. ethicist. I was introduced to Shannon by Karina Montilla Edmonds at Google Cloud AI; we did an episode with Karina a few months ago focused on Google's A.I. efforts. Shannon works with the Google Cloud AI team on a regular basis, helping them shape and frame difficult issues in the development of this emerging technology.

Shannon is a Philosophy Professor specializing in the Philosophy of Science & Technology at Santa Clara University in Silicon Valley, where she teaches and conducts research on the ethics of artificial intelligence, robotics, digital media, surveillance, and biomedical enhancement. She is the author of the book 'Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting'. Shannon is also Co-Director and Secretary of the Board for the Foundation for Responsible Robotics, and a Faculty Scholar with the Markkula Center for Applied Ethics at Santa Clara University.

We start out exploring the ethical issues surrounding our personal digital lives, social media and big data, then dive into the thorny ethics of artificial intelligence.

More on Shannon:
Website - https://www.shannonvallor.net
Twitter - https://twitter.com/shannonvallor
Markkula Center for Applied Ethics - https://www.scu.edu/ethics
Foundation for Responsible Robotics - https://responsiblerobotics.org

More at: https://www.MindAndMachine.io

Philosophical Disquisitions
Episode #45 - Vallor on Virtue Ethics and Technology

Sep 18, 2018


In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology, sits on the Board of Directors of the Foundation for Responsible Robotics, and is a member of the IEEE Standards Association's Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. We talk about the problem of techno-social opacity and the value of virtue ethics in an era of rapid technological change.

You can download the episode here or listen below. You can also subscribe to the podcast on iTunes or Stitcher (the RSS feed is here).

Show Notes

0:00 - Introduction
1:39 - How students encouraged Shannon to write Technology and the Virtues
6:30 - The problem of acute techno-moral opacity
12:34 - Is this just the problem of morality in a time of accelerating change?
17:16 - Why can't we use abstract moral principles to guide us in a time of rapid technological change? What's wrong with utilitarianism or Kantianism?
23:40 - Making the case for technologically-sensitive virtue ethics
27:27 - The analogy with education: teaching critical thinking skills vs providing students with information
31:19 - Aren't most virtue ethical traditions too antiquated? Aren't they rooted in outdated historical contexts?
37:54 - Doesn't virtue ethics assume a relatively fixed human nature? What if human nature is one of the things that is changed by technology?
42:34 - Case study on social media: defending Mark Zuckerberg
46:54 - The dark side of social media
52:48 - Are we trapped in an immoral equilibrium? How can we escape?
57:17 - What would the virtuous person do right now? Would he/she delete Facebook?
1:00:23 - Can we use technology to solve problems created by technology? Will this help to cultivate the virtues?
1:05:00 - The virtue of self-regard and the problem of narcissism in a digital age

Relevant Links

  • Shannon's homepage
  • Shannon's profile at Santa Clara University
  • Shannon's Twitter profile
  • Technology and the Virtues (now in paperback!) by Shannon
  • 'Social Networking Technology and the Virtues' by Shannon
  • 'Moral Deskilling and Upskilling in a New Machine Age' by Shannon
  • 'The Moral Problem of Accelerating Change' by John Danaher

Subscribe to the newsletter

AI with AI
AI with AI: Debater of the AI-ncients, Part 1 (Dota)

Jul 6, 2018 · 36:24


In breaking news, Andy and Dave discuss a potentially groundbreaking paper on the scalable training of artificial neural nets with adaptive sparse connectivity. MIT researchers unveil the Navion chip, which is only 20 square millimeters in size, consumes 24 milliwatts of power, can process real-time camera images at up to 171 frames per second, and can be integrated into drones the size of a fingernail. The Chair of the Armed Services Subcommittee on Emerging Threats and Capabilities convened a roundtable on AI with subject matter experts and industry leaders. The IEEE Standards Association and the MIT Media Lab launched the Council on Extended Intelligence (CXI) to build a “new narrative” on autonomous technologies, including three pilot programs, one of which seeks to help individuals “reclaim their digital identity.” And the Foundation for Responsible Robotics, which wants to shape the responsible design and use of robotics, releases a report on Drones in the Service of Society.

Then, Andy and Dave discuss IBM’s Project Debater, the follow-on to Watson, which engaged in a live, public debate with humans on 18 June. IBM spent six years developing Project Debater’s capabilities, producing over 30 technical papers and benchmark datasets; Debater can debate nearly 100 topics. It uses three pioneering capabilities: data-driven speech writing and delivery, listening comprehension, and the ability to model human dilemmas.

Next up, OpenAI announces OpenAI Five, a team of five AI algorithms trained to take on a human team in the multiplayer battle arena game Dota 2. Andy and Dave discuss the reasons for the impressive achievement, including that the five AI networks do not communicate with each other, and that coordination and collaboration naturally emerge from their incentive structures. The system uses 256 Nvidia graphics cards and 128,000 processor cores; it has taken on (and beaten) a variety of human teams, and OpenAI plans to stream a match against a top Dota 2 team in late July.

The AI Network Podcast
Ep.50: Aimee van Wynsberghe, Foundation for Responsible Robotics

Jun 8, 2018 · 40:43


In an effort to uncover potential areas of innovation within the industry, Aimee van Wynsberghe joins us to discuss ethics and governance in AI & Intelligent Automation. She is the President and Co-Founder of the Foundation for Responsible Robotics, and we discuss how good ethics and governance can result in good business.

Tech's Message: News & Analysis With Nate Lanxon (Bloomberg, Wired, CNET)
You See Me Battling the Killer Robots (With Prof. Noel Sharkey): TM 111 REGULAR VERSION

Nov 5, 2017 · 53:42


Please support us on Patreon at www.patreon.com/uktech for access to our exclusive extended version of the show, weekly columns from Nate, and much more.

This week on the regular version of TECH'S MESSAGE, Nate and Ian discuss:

  • Apple Watch in Surprisingly Strong Demand at U.K. Carrier EE
  • Vodafone's paid zero-rating Passes are now available
  • Google's Pixel Failure Sees Customers Getting A Totally Unusable Phone
  • Sainsbury’s bets on the vinyl revival with its own record label

SPECIAL FEATURE: Interview with Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics at the University of Sheffield. Noel, who is co-director of the Foundation for Responsible Robotics and chair of the International Committee for Robot Arms Control, joins the show to discuss his work campaigning for awareness of the problems posed by lethal autonomous weapons systems. A fascinating discussion from the man who also appears as a judge on the BBC's Robot Wars. An interview not to miss! Follow Noel on Twitter.

Patreon supporters have access to our longer version of the show, which includes the above as well as additional discussions about:

  • EXTRA STORY: Apple And Google Give People A Fright By Sorting Their Naughty Photos
  • Lengthy discussion about the ups and downs of image recognition systems in phones
  • Long talk about Google, Apple and Opera software fanboys - and our history with them
  • £9,000 HDMI cables? Not on our watch. We explain why this is an example of silly tech
  • Outtakes and more!

Access our exclusive content and support Nate and Ian's podcasting by becoming a Patron at www.patreon.com/uktech.

DE INTERVIEW PODCAST VOOR ONDERNEMEND NEDERLAND | 7DTV
Aimee van Wynsberghe (Foundation for Responsible Robotics): ‘You often hear that ethics slows down innovation, but not if you use it the right way’

Jul 19, 2017


‘Robots are coming.’ We all know that. But can robots also act ethically? Take responsibility? 7 Ditches spoke with an expert in this field: Aimee van Wynsberghe, co-director of the Foundation for Responsible Robotics.

Surgical robotics
During her studies, Aimee became fascinated by robotics. Her first encounter with a robot came during her side job at Canadian Surgical Technologies & Advanced Robotics (CSTAR), where she and a team tested the da Vinci Surgical System for robotic surgery. Aimee: ‘During this introduction to surgical robots, I knew: this is it.’ Today, a robot still operates together with a physician. But is it possible to let a robot perform an operation on its own? ‘That is where the ethical side of robotic surgery comes in. The question is not whether it is possible, but whether we should want it, and why or why not. After all, who is responsible if something goes wrong? The physician on duty, the company that makes the robot, the designer, or the telecommunications company?’

Sex with a robot
The Foundation for Responsible Robotics (FRR) has presented a report on our sexual future with robots. Here too, the key question is: ‘Is it okay if people have sex with robots in the future? Should we want this? Why should we want this?’ This episode takes a deeper look at our relationship with robots, now and in the future. A must-watch!

De #1 Podcast voor ondernemers | 7DTV | Ronnie Overgoor in gesprek met inspirerende ondernemers
Aimee van Wynsberghe (Foundation for Responsible Robotics): ‘You often hear that ethics slows down innovation, but not if you use it the right way'

Jul 19, 2017 · 23:29


‘Robots are coming.' We all know that. But can robots also act ethically? Take responsibility? 7 Ditches spoke with an expert in this field: Aimee van Wynsberghe, co-director of the Foundation for Responsible Robotics.

Surgical robotics
During her studies, Aimee became fascinated by robotics. Her first encounter with a robot came during her side job at Canadian Surgical Technologies & Advanced Robotics (CSTAR), where she and a team tested the da Vinci Surgical System for robotic surgery. Aimee: ‘During this introduction to surgical robots, I knew: this is it.' Today, a robot still operates together with a physician. But is it possible to let a robot perform an operation on its own? ‘That is where the ethical side of robotic surgery comes in. The question is not whether it is possible, but whether we should want it, and why or why not. After all, who is responsible if something goes wrong? The physician on duty, the company that makes the robot, the designer, or the telecommunications company?'

Sex with a robot
The Foundation for Responsible Robotics (FRR) has presented a report on our sexual future with robots. Here too, the key question is: ‘Is it okay if people have sex with robots in the future? Should we want this? Why should we want this?' This episode takes a deeper look at our relationship with robots, now and in the future. A must-watch!

Building a Future with Robots
Humans Making Responsible Decisions

Jun 27, 2017 · 7:38


We’ve talked about the importance of designing a robot to make responsible decisions, but what about making responsible decisions when designing robots? In this video, Professor Noel Sharkey talks about some important ethical considerations for developing autonomous robots. Noel is Emeritus Professor of Robotics and Artificial Intelligence at the University of Sheffield, co-founder of the International Committee for Robot Arms Control, and co-founder of the Foundation for Responsible Robotics.