For many, technology offers hope for the future―that promise of shared human flourishing and liberation that always seems to elude our species. Artificial intelligence (AI) technologies spark this hope in a particular way. They promise a future in which human limits and frailties are finally overcome―not by us, but by our machines. Yet rather than open new futures, today's powerful AI technologies reproduce the past. Forged from oceans of our data into immensely powerful but flawed mirrors, they reflect the same errors, biases, and failures of wisdom that we strive to escape. Our new digital mirrors point backward. They show only where the data say that we have already been, never where we might venture together for the first time. To meet today's grave challenges to our species and our planet, we will need something new from AI, and from ourselves. In The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford UP, 2024), Shannon Vallor makes a wide-ranging, prophetic, and philosophical case for what AI could be: a way to reclaim our human potential for moral and intellectual growth, rather than lose ourselves in mirrors of the past. Rejecting prophecies of doom, she encourages us to pursue technology that helps us recover our sense of the possible, and with it the confidence and courage to repair a broken world. Professor Vallor calls us to rethink what AI is and can be, and what we want to be with it. Our guest is: Professor Shannon Vallor, who is the Baillie Gifford Professor in the Ethics of Data and AI at the University of Edinburgh, where she directs the Centre for Technomoral Futures in the Edinburgh Futures Institute. She is a standing member of Stanford's One Hundred Year Study of Artificial Intelligence (AI100) and member of the Oversight Board of the Ada Lovelace Institute. 
Professor Vallor joined the Futures Institute in 2020 following a career in the United States as a leader in the ethics of emerging technologies, including a post as a Visiting AI Ethicist at Google from 2018 to 2020. She is the author of The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking and Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, and the editor of The Oxford Handbook of Philosophy of Technology. She advises government and industry bodies on responsible AI and data ethics, and is Principal Investigator and Co-Director of the UKRI research programme BRAID (Bridging Responsible AI Divides), funded by the Arts and Humanities Research Council. Our host is: Dr. Christina Gessler, creator and producer of the Academic Life podcast. Listeners may enjoy this playlist: More Than A Glitch; Artificial Unintelligence: How Computers Misunderstand the World. Welcome to Academic Life, the podcast for your academic journey—and beyond! You can support the show by downloading and sharing episodes. Join us again to learn from more experts inside and outside the academy, and around the world. Missed any of the 250+ Academic Life episodes? Find them here. And thank you for listening! Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/science-technology-and-society
What if we saw Artificial Intelligence as a mirror rather than as a form of intelligence? That's the subject of a fabulous new book by Professor Shannon Vallor, who is my guest on this episode. In our discussion, we explore how artificial intelligence reflects not only our technological prowess but also our ethical choices, biases, and the collective values that shape our world. We also discuss how AI systems mirror our societal flaws, raising critical questions about accountability, transparency, and the role of ethics in AI development. Shannon helps me to examine the risks and opportunities presented by AI, particularly in the context of decision-making, privacy, and the potential for AI to influence societal norms and behaviours. This episode offers a thought-provoking exploration of the intersection between technology and ethics, urging us to consider how we can steer AI development in a direction that aligns with our shared values.

Guest Biography

Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy. She is Director of the Centre for Technomoral Futures in EFI, and co-Director of the BRAID (Bridging Responsible AI Divides) programme, funded by the Arts and Humanities Research Council. Professor Vallor's research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices. Her work includes advising policymakers and industry on the ethical design and use of AI. She is a standing member of the One Hundred Year Study of Artificial Intelligence (AI100) and a member of the Oversight Board of the Ada Lovelace Institute. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network and the 2022 Covey Award from the International Association of Computing and Philosophy.
She is a former Visiting Researcher and AI Ethicist at Google. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the books Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024).

AI Generated Timestamped Summary of Key Points:
00:02:30: Introduction to Professor Shannon Vallor and her work.
00:06:15: Discussion on AI as a mirror of societal values.
00:10:45: The ethical implications of AI decision-making.
00:18:20: How AI reflects human biases and the importance of transparency.
00:25:50: The role of ethics in AI development and deployment.
00:33:10: Challenges of integrating AI into human-centred contexts.
00:41:30: The potential for AI to shape societal norms and behaviours.
00:50:15: Professor Vallor's insights on the future of AI and ethics.
00:58:00: Closing thoughts and reflections on AI's impact on humanity.

Links
To find out more about Shannon and her work, visit her website: https://www.shannonvallor.net/
The AI Mirror: https://global.oup.com/academic/product/the-ai-mirror-9780197759066?
A Noema essay by Shannon on the dangers of AI: https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/
A New Yorker feature on the book: https://www.newyorker.com/culture/open-questions/in-the-age-of-ai-what-makes-people-unique
The AI Mirror as one of the FT's technology books of the summer: https://www.ft.com/content/77914d8e-9959-4f97-98b0-aba5dffd581c
The FT review of The AI Mirror: https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011
For more on the Edinburgh Futures Institute: https://efi.ed.ac.uk/
In Episode 26 we will do a deep dive into Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting by Shannon Vallor. This book expands on the ideas she developed in her article "Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character," which we explored back in Episode 7. The book argues for the importance of virtue ethics as a framework for living the good life, then establishes a set of technomoral virtues to help us live the good life in today's technology-saturated world.

Follow on Twitter: https://twitter.com/kendallgiles
Join to support the show and for exclusive content, including episode notes, scripts, and other writings: https://patreon.com/kendallgiles
In this episode, guests Svenja O'Donnell and Shannon Vallor discuss data biases, the danger of apathy, saucepan-enabled smuggling and more.

Svenja is a writer, journalist, commentator and freelance adviser on Brexit, and UK and EU politics. She previously worked as a foreign, economics and politics correspondent at Bloomberg News and Businessweek. Her articles have appeared in various publications, including the Financial Times, Sunday Times and Independent. She was awarded the Washington National Press Club Breaking News Prize in 2017 for her coverage of the Brexit referendum, and her first book, ‘Inge’s War: A Story of Family, Secrets and Survival under Hitler’, was published in August 2020.

Shannon is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI), and Professor in Philosophy. Prior to joining the University, Shannon was a Visiting Researcher and AI Ethicist at Google. Her research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices. Her first book, ‘Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting’, was published in 2016, and she is currently working on ‘The AI Mirror: Rebuilding Humanity in an Age of Machine Thinking’.

Each episode of Sharing things is a conversation between two members of our university community. It could be a student, a member of staff or a graduate; the only thing they have in common at the beginning is Edinburgh. We start with an object: a special, treasured or significant item that we have asked each guest to bring to the conversation. What happens next is sometimes funny, sometimes moving and always unexpected. Find out more at www.ed.ac.uk/sharing-things-podcast
The 21st century offers a dizzying array of new technological developments: robots smart enough to take white collar jobs, social media tools that manage our most important relationships, ordinary objects that track, record, analyze and share every detail of our daily lives, and biomedical techniques with the potential to transform and enhance human minds and bodies to an unprecedented degree. Emerging technologies are reshaping our habits, practices, institutions, cultures and environments in increasingly rapid, complex and unpredictable ways that create profound risks and opportunities for human flourishing on a global scale. How can our future be protected in such challenging and uncertain conditions? How can we possibly improve the chances that the human family will not only live, but live well, into the 21st century and beyond? Shannon Vallor's Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) locates a key to that future in the distant past: specifically, in the philosophical traditions of virtue ethics developed by classical thinkers from Aristotle and Confucius to the Buddha. Each developed a way of seeking the good life that equips human beings with the moral and intellectual character to flourish even in the most unpredictable, complex and unstable situations--precisely where we find ourselves today. Through an examination of the many risks and opportunities presented by rapidly changing technosocial conditions, Vallor makes the case that if we are to have any real hope of securing a future worth wanting, then we will need more than just better technologies. We will also need better humans. 
Technology and the Virtues develops a practical framework for seeking that goal by means of the deliberate cultivation of technomoral virtues: specific skills and strengths of character, adapted to the unique challenges of 21st century life, that offer the human family our best chance of learning to live wisely and well with emerging technologies. John Danaher is a lecturer at the National University of Ireland, Galway. He is also the host of the wonderful podcast Philosophical Disquisitions. You can find it here on Apple Podcasts. Learn more about your ad choices. Visit megaphone.fm/adchoices
My guest today is Shannon Vallor, a technology and A.I. ethicist. I was introduced to Shannon by Karina Montilla Edmonds at Google Cloud AI — we did an episode with Karina a few months ago focused on Google's A.I. efforts. Shannon works with the Google Cloud AI team on a regular basis, helping them shape and frame difficult issues in the development of this emerging technology. Shannon is a Philosophy Professor specializing in the Philosophy of Science & Technology at Santa Clara University in Silicon Valley, where she teaches and conducts research on the ethics of artificial intelligence, robotics, digital media, surveillance, and biomedical enhancement. She is the author of the book 'Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting'. Shannon is also Co-Director and Secretary of the Board for the Foundation for Responsible Robotics, and a Faculty Scholar with the Markkula Center for Applied Ethics at Santa Clara University. We start out exploring the ethical issues surrounding our personal digital lives, social media and big data, then dive into the thorny ethics of artificial intelligence. More on Shannon: Website - https://www.shannonvallor.net Twitter - https://twitter.com/shannonvallor Markkula Center for Applied Ethics - https://www.scu.edu/ethics Foundation for Responsible Robotics - https://responsiblerobotics.org __________ More at: https://www.MindAndMachine.io
Our 100th episode spectacular – with a look at where we have come from and where we are going. Show Notes It’s been four and a half years and 100 episodes of Winning Slowly! We pause to reflect on what we’ve done, what we’re about, and where we hope to go from here. We also reflect on some of our craziest titles along the way. (“Buying Me Off With Warm Fuzzies”? “Juice Up the Weird Edges of the Ecosystem”? These got wild at times.) Links Cameron Morgan Vallor’s book: Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting My blog post about it: Good Arguments IBM Has a Watson Dilemma - WSJ Previous episodes I mean, look people: basically it’s just “go look at earlier seasons.” So… quick links to earlier seasons it is! Season 0 Season 1 Season 2 Season 3 Season 4 Season 5 Season 6 Music “No Haters” from New Life by The Midnight Sons, a.k.a. Stephen. It’s Creative Commons Attribution licensed, just like this website! “Winning Slowly Theme” by Chris Krycho. Sponsors Many thanks to the people who help us make this show possible by their financial support! This month’s sponsors: Kurt Klassen Jeremy W. Sherman If you’d like to support the show, you can make a pledge at Patreon or give directly via Square Cash. Respond We love to hear your thoughts. Hit us up via Twitter, Facebook, or email!
As you may have noticed (even in the barrage of election coverage), I've been silent since the end of July. The reason is rather simple: since July, I've taught five classes (Contracts, Intellectual Property Survey, two sections of Internet Law, and a new course (for me), Employment Discrimination Law). To do that well, along with being a present husband and father to my two young sons, and maintain forward motion with my scholarship, Hearsay Culture gives way. I don't like that effect, but it's unavoidable so long as I continue to do the show gratis (which is not a complaint; it's a reality). So, on this momentous and nerve-wracking Election Day afternoon, I'm pleased to post one new show, Show # 259, September 16, my interview with the amazing Prof. Shannon Vallor of Santa Clara University, author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Shannon has written an exceptionally important and unique work focusing on what personal virtues should guide our integration of new technologies into society. Defining the contours of what she calls "technomoral virtues," Shannon calls on informed citizens to become "moral experts" in a collective effort to create "a future worth wanting" (or, even better, to demand "useful tools that do not debilitate us"). Because Shannon writes about philosophy and virtue as an applicable construct rather than an abstraction, her book should be required reading for anyone seeking a better understanding of how we might achieve the best social and moral results from our technological advancements. I very much enjoyed the interview, and hope that you find it valuable and gripping. Indeed, with so much left to discuss, look for part two of the interview in December! {Hearsay Culture is a talk show on KZSU-FM, Stanford, 90.1 FM, hosted by Center for Internet & Society Resident Fellow David S. Levine. The show includes guests and focuses on the intersection of technology and society. 
How is our world impacted by the great technological changes taking place? Each week, a different sphere is explored. For more information, please go to http://hearsayculture.com.}