
Latest episodes from Philosophical Disquisitions

TITE 10 - Bonus Episode: Audience Q and A

Dec 20, 2023


In this episode, John and Sven answer questions from podcast listeners. Topics covered include: the relationships between animal ethics and AI ethics; religion and the philosophy of tech; the analytic-continental divide; the debate about short-term vs long-term risks; getting engineers to take ethics seriously; and much, much more. Thanks to everyone who submitted a question. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

TITE 9 - Human-Technology Futures

Dec 20, 2023


What does the future hold for humanity's relationship with technology? Will we become ever more integrated with and dependent on technology? What are the normative and axiological consequences of this? In this episode, Sven and John discuss these questions and reflect, more generally, on technology, ethics and the value of speculation about the future. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommended Reading
Mark Coeckelbergh, The Political Philosophy of AI
David Chalmers, Reality+

TITE 8 - Machines as Colleagues, Friends and Lovers

Dec 20, 2023


In this episode, Sven and John talk about relationships with machines. Can you collaborate with a machine? Can robots be friends, colleagues or, perhaps, even lovers? These are common tropes in science fiction and popular culture, but is there any credibility to them? What would the ethical status of such relationships be? Should they be welcomed or avoided? These are just some of the questions addressed in this episode. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommended Reading
Evans, Robbins and Bryson, 'Do we collaborate with what we design?'
Helen Ryland, 'It's Friendship Jim But Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships'

TITE 7 - Can Machines be Moral Patients?

Dec 19, 2023


In this episode, Sven and John discuss the moral status of machines, particularly humanoid robots. Could machines ever be more than mere things? Some people see this debate as a distraction from the important ethical questions pertaining to technology; others take it more seriously. Sven and John share their thoughts on this topic and give some guidance as to how to think about the nature of moral status and its significance. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommended Reading
David Gunkel, Person, Thing, Robot
Butlin, Long et al, 'Consciousness in AI: Insights from the Science of Consciousness'
Summary of the above paper from Daily Nous

TITE 6 - Moral Agency in Machines

Dec 19, 2023


In this episode, Sven and John discuss the controversy arising from the idea of moral agency in machines. What is an agent? What is a moral agent? Is it possible to create a machine with a sense of moral agency? Is this desirable, or to be avoided at all costs? These are just some of the questions up for debate. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommended Reading
Amanda Sharkey, 'Can we program or train robots to be good?'
Paul Formosa and Malcolm Ryan, 'Making Moral Machines: Why We Need Artificial Moral Agents'
Michael Anderson and Susan Leigh Anderson, 'Machine ethics: creating an ethical intelligent agent'
Carissa Véliz, 'Moral zombies: why algorithms are not moral agents'

TITE 5 - Technology and Responsibility Gaps

Dec 19, 2023


In this episode, Sven and John discuss the thorny topic of responsibility gaps and technology. Over the past two decades, a small cottage industry of legal and philosophical research has arisen around the idea that increasingly autonomous machines create gaps in responsibility. But what does this mean? Is it a serious ethical/legal problem? How can it be resolved? All this and more is explored in this episode. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommended Reading
Robert Sparrow, 'Killer Robots'
Alexander Hevelke and Julian Nida-Rümelin, 'Responsibility for Crashes of Autonomous Vehicles'
Andreas Matthias, 'The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata'
Jack Stilgoe, Who's Driving Innovation, Chapter 1

Discount
To get a discounted copy of Sven's book, click here and use the code 'TEC20' to get 20% off the regular price.

TITE 4 - Behaviour Change and Control

Dec 19, 2023


In this episode, John and Sven talk about the role that technology can play in changing our behaviour. In doing so, they note the long and troubled history of philosophy and self-help. They also ponder whether we can use technology to control our lives or whether technology controls us. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommendations
Brett Frischmann and Evan Selinger, Reengineering Humanity
Carissa Véliz, Privacy is Power

TITE 3 - Value Alignment and the Control Problem

Oct 10, 2023


In this episode, John and Sven discuss risk and technology ethics. They focus, in particular, on the perennially popular and widely discussed problems of value alignment (how to get technology to align with our values) and control (making sure technology doesn't do something terrible). They start the conversation with the famous case study of Stanislav Petrov and the prevention of nuclear war. You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommendations for further reading
Atoosa Kasirzadeh and Iason Gabriel, 'In Conversation with AI: Aligning Language Models with Human Values'
Nick Bostrom, relevant chapters from Superintelligence
Stuart Russell, Human Compatible
Langdon Winner, 'Do Artifacts Have Politics?'
Iason Gabriel, 'Artificial Intelligence, Values and Alignment'
Brian Christian, The Alignment Problem

Discount
You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.

TITE 2: The Methods of Technology Ethics

Sep 29, 2023


In this episode, John and Sven discuss the methods of technology ethics. What exactly is it that technology ethicists do? How can they answer the core questions about the value of technology and our moral response to it? Should they consult their intuitions? Run experiments? Use formal theories? The possible answers to these questions are considered with a specific case study on the ethics of self-driving cars. You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.

Recommended Reading
Peter Königs, 'Of Trolleys and Self-Driving Cars: What machine ethicists can and cannot learn from trolleyology'
John Harris, 'The Immoral Machine'
Edmond Awad et al, 'The Moral Machine Experiment'

Discount
You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.

New Podcast Series - 'This is Technology Ethics'

Sep 25, 2023


I am very excited to announce the launch of a new podcast series with my longtime friend and collaborator Sven Nyholm. The podcast is intended to introduce key themes, concepts, arguments and ideas arising from the ethics of technology. It roughly follows the structure of the book This is Technology Ethics by Sven, but in a loose and conversational style. Over the nine episodes, we will cover the nature of technology and ethics, the methods of technology ethics, and the problems of control, responsibility, agency and behaviour change that are central to many contemporary debates about the ethics of technology. We will also cover perennially popular topics such as whether a machine could have moral status, whether a robot could (or should) be a friend, lover or work colleague, and the desirability of merging with machines. The podcast is intended to be accessible to a wide audience and could provide an ideal companion to an introductory or advanced course in the ethics of technology (with particular focus on AI, robotics and other digital technologies). I will be releasing the podcast on the Philosophical Disquisitions podcast feed, but I have also created an independent podcast feed and website, if you are just interested in it. The first episode can be downloaded here or you can listen below. You can also subscribe on Apple, Spotify, Amazon and a range of other podcasting services. If you go to the website or subscribe via the standalone feed, you can download the first two episodes now. There is also a promotional tie-in with the book publisher. If you use the code 'TEC20' on the publisher's website (here) you can get 20% off the regular price.

110 - Can we pause AI Development? Evidence from the history of technological restraint

Jun 6, 2023


In this episode, I chat to Matthijs Maas about pausing AI development. Matthijs is currently a Senior Research Fellow at the Legal Priorities Project and a Research Affiliate at the Centre for the Study of Existential Risk at the University of Cambridge. In our conversation, we focus on the possibility of slowing down or limiting the development of technology. Many people are sceptical of this possibility, but Matthijs has been doing extensive research on historical case studies of apparently successful technological slowdown. We discuss these case studies in some detail. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links
Recording of Matthijs's talk about this topic: https://www.youtube.com/watch?v=vn4ADfyrJ0Y&t=2s
Slides from this talk: https://drive.google.com/file/d/1J9RW49IgSAnaBHr3-lJG9ZOi8ZsOuEhi/view?usp=share_link
Previous essay/primer, laying out the basics of the argument: https://verfassungsblog.de/paths-untaken/
Incomplete longlist database of candidate case studies: https://airtable.com/shrVHVYqGnmAyEGsz

109 - How Can We Align Language Models like GPT with Human Values?

May 30, 2023


In this episode of the podcast I chat to Atoosa Kasirzadeh. Atoosa is an Assistant Professor/Chancellor's Fellow at the University of Edinburgh. She is also the Director of Research at the Centre for Technomoral Futures at Edinburgh. We chat about the alignment problem in AI development, roughly: how do we ensure that AI acts in a way that is consistent with human values? We focus, in particular, on the alignment problem for language models such as ChatGPT, Bard and Claude, and on how some old ideas from the philosophy of language could help us to address this problem. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links
Atoosa's webpage
Atoosa's paper (with Iason Gabriel), 'In Conversation with AI: Aligning Language Models with Human Values'

108 - Miles Brundage (Head of Policy Research at OpenAI) on the speed of AI development and the risks and opportunities of GPT

May 3, 2023


[UPDATED WITH CORRECT EPISODE LINK] In this episode I chat to Miles Brundage. Miles leads the policy research team at OpenAI. Unsurprisingly, we talk a lot about GPT and generative AI. Our conversation covers the risks that arise from their use, their speed of development, how they should be regulated, the harms they may cause and the opportunities they create. We also talk a bit about what it is like working at OpenAI and why Miles made the transition from academia to industry (sort of). Lots of useful insight in this episode from someone at the coalface of AI development. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

107 - Will Large Language Models disrupt healthcare?

Apr 19, 2023


In this episode of the podcast I chat to Jess Morley. Jess is currently a DPhil candidate at the Oxford Internet Institute. Her research focuses on the use of data in healthcare, oftentimes on the impact of big data and AI but, as she puts it herself, usually on 'less whizzy' things. Sadly, our conversation focuses on the whizzy things, in particular the recent hype about large language models and their potential to disrupt the way in which healthcare is managed and delivered. Jess is sceptical about the immediate potential for disruption but thinks it is worth exploring, carefully, the use of this technology in healthcare. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links
Jess's Website
Jess on Twitter
John Snow's cholera map

106 - Why GPT and other LLMs (probably) aren't sentient

Apr 11, 2023


In this episode, I chat to Robert Long about AI sentience. Robert is a philosopher who works on issues related to the philosophy of mind, cognitive science and AI ethics. He is currently a philosophy fellow at the Center for AI Safety in San Francisco. He completed his PhD at New York University. We do a deep dive on the concept of sentience, why it is important, and how we can tell whether an animal or AI is sentient. We also discuss whether it is worth taking the topic of AI sentience seriously. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links
Robert's webpage
Robert's substack

105 - GPT: Higher Education's Jurassic Park Moment?

Apr 2, 2023


In this episode of the podcast, I talk to Thore Husfeldt about the impact of GPT on education. Thore is a Professor of Computer Science at the IT University of Copenhagen, where he specialises in pretty technical algorithm-related research. He is also affiliated with Lund University in Sweden. Beyond his technical work, Thore is interested in ideas at the intersection of computer science, philosophy and educational theory. In our conversation, Thore outlines four models of what a university education is for, and considers how GPT disrupts these models. We then talk, in particular, about the 'signalling' theory of higher education and how technologies like GPT undercut the value of certain signals, and thereby undercut some forms of assessment. Since I am an educator, I really enjoyed this conversation, but I firmly believe there is much food for thought in it for everyone. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

104 - What will be the economic impact of GPT?

Mar 28, 2023


In this episode of the podcast, I chat to Anton Korinek about the economic impacts of GPT. Anton is a Professor of Economics at the University of Virginia and the Economics Lead at the Centre for AI Governance. He has researched widely on the topic of automation and labour markets. We talk about whether GPT will substitute for or complement human workers; the disruptive impact of GPT on economic organisation; the jobs/roles most immediately at risk; the impact of GPT on wage levels; the skills needed to survive in an AI-enhanced economy; and much more. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links
Anton's homepage
Anton's paper outlining 25 uses of LLMs for academic economists
Anton's dialogue with GPT, Claude and the economist David Autor

103 - GPT: How worried should we be?

Mar 23, 2023


In this episode of the podcast, I chat to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology in Sweden. We talk about GPT and LLMs more generally. What are they? Are they intelligent? What risks do they pose or presage? Are we proceeding with the development of this technology in a reckless way? We try to answer all these questions, and more. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

102 - Fictional Dualism and Social Robots

Dec 16, 2022


How should we conceive of social robots? Some sceptics think they are little more than tools and should be treated as such. Others are more bullish on their potential to attain full moral status. Is there some middle ground? In this episode, I talk to Paula Sweeney about this possibility. Paula defends a position she calls 'fictional dualism' about social robots. This allows us to relate to social robots in creative, human-like ways, without necessarily ascribing them moral status or rights. Paula is a philosopher based at the University of Aberdeen, Scotland. She has a background in the philosophy of language (which we talk about a bit) but has recently turned her attention to the applied ethics of technology. She is currently writing a book about social robots. You can download the episode here, or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services.

Relevant Links
A Fictional Dualism Model of Social Robots, by Paula
Trusting Social Robots, by Paula
Why Indirect Harms do Not Support Social Robot Rights, by Paula

101 - Pistols, Pills, Pork and Ploughs: How Technology Changes Morality

Nov 28, 2022


It's clear that human social morality has gone through significant changes in the past. But why? What caused these changes? In this episode, I chat to Jeroen Hopster from the University of Utrecht about this topic. We focus, in particular, on a recent paper that Jeroen co-authored with a number of colleagues about four historical episodes of moral change and what we can learn from them. That paper, from which I take the title of this podcast, was called 'Pistols, Pills, Pork and Ploughs' and, as you might imagine, looks at how specific technologies (pistols, pills, pork and ploughs) have played a key role in catalysing moral change. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

100 - The Past and Future of Transhumanism

Nov 22, 2022


In this episode (which by happenstance is the 100th official episode - although I have released more than that) I chat to Elise Bohan. Elise is a senior research scholar at the Future of Humanity Institute at Oxford University. She has a PhD in macrohistory ("big" history) and has written the first book-length history of the transhumanist movement. She has also, recently, published the book Future Superhuman, which is a guide to transhumanist ideas and arguments. We talk about this book in some detail, and cover some of its more controversial claims. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

99 - Trusting Untrustworthy Machines and Other Psychological Quirks

Nov 7, 2022


In this episode I chat to Matthias Uhl. Matthias is a professor of the social and ethical implications of AI at the Technische Hochschule Ingolstadt. Matthias is a behavioural scientist who has been doing a lot of work on human-AI/robot interaction. He focuses, in particular, on applying some of the insights and methodologies of behavioural economics to these questions. We talk about three recent studies he and his collaborators have run, revealing interesting quirks in how humans relate to AI decision-making systems. In particular, his findings suggest that people do outsource responsibility to machines, are willing to trust untrustworthy machines, and prefer the messy discretion of human decision-makers over the precise logic of machines. Matthias's research is fascinating and has some important implications for people working in AI ethics and policy. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
Matthias's Faculty Page
'Hiding Behind Machines: Artificial Agents May Help to Evade Punishment' by Matthias and colleagues
'Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions' by Matthias and colleagues
'People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency' by Matthias and colleagues

Ethics of Academia (12) - Olle Häggström

Sep 20, 2022


In this episode (the last in this series for the time being) I chat to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology in Sweden. Having spent the first half of his academic life focused largely on pure mathematical research, Olle has shifted focus in recent years to consider how research can benefit humanity and how some research might be too risky to pursue. We have a detailed conversation about the ethics of research and contrast different ideals of what it means to be a scientist in the modern age. Lots of great food for thought in this one. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (11) - Jessica Flanigan

Sep 13, 2022


In this episode I chat to Jessica Flanigan. Jessica is a Professor of Leadership Ethics at the University of Richmond, where she is also the Richard L Morrill Chair in Ethics & Democratic Values. We talk about the value of philosophical research, whether philosophers should emulate Socrates, and how to create good critical discussions in the classroom. I particularly enjoyed hearing Jessica's ideas about effective teaching and I think everyone can learn something from them. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (10) - Jesse Stommel

Sep 6, 2022


Is grading unethical? Coercive and competitive? Should we replace grading with something else? In this podcast I chat to Jesse Stommel, one of the foremost proponents of 'ungrading'. Jesse is a faculty member of the writing program at the University of Denver and is the founder of the Hybrid Pedagogy journal. We talk about the problem with traditional grading systems, the idea of ungrading, and how to create communities of respect in the classroom. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (9) - Jason Brennan

Aug 26, 2022


In this episode I talk to Jason Brennan. Jason is a Professor of Strategy, Economics, Ethics, and Public Policy at the McDonough School of Business at Georgetown University. He is a prolific and productive scholar, having published over 20 books and 70 articles in the past decade or so. His research focuses on the intersections between politics, economics and philosophy. He has written quite a bit about the moral failures and conundrums of higher education, which makes him an ideal guest for this podcast. We talk about the purpose of research, the ethics of productivity, the problem with PhD programmes and the plight of adjuncts. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (8) - Zena Hitz

Play Episode Listen Later Aug 17, 2022


In this episode I chat to Zena Hitz. Zena is currently a tutor at St John's College. She is a classicist and the author of the book Lost in Thought. We have a wide-ranging conversation about losing faith in academia, the dubious value of scholarship, the importance of learning, and the risks inherent in teaching. I learned a lot talking to Zena and found her perspective on the role of academics and educators to be enlightening. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (7) - Aaron Rabinowitz

Play Episode Listen Later Jul 25, 2022


In this episode I chat to Aaron Rabinowitz. Aaron is a veteran podcaster and philosopher. He hosts the Embrace the Void and Philosophers in Space podcasts. He is currently doing a PhD in the philosophy of education at Rutgers University. Aaron is particularly interested in the problem of moral luck and how it should affect our approach to education. This was a fun conversation. Stay tuned for the Schopenhauer thought experiment around the 40-minute mark! You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (6) - Helen de Cruz

Play Episode Listen Later Jul 20, 2022


In this episode I chat to Helen De Cruz. Helen is the Danforth Chair in the Humanities at Saint Louis University. Helen has a diverse set of interests and outputs. Her research focuses on the philosophy of belief formation, but she also does a lot of professional and public outreach, writes science fiction, and plays the lute. If that wasn't impressive enough, she is also a very talented illustrator/artist, as can be seen from her book Philosophy Illustrated. We have a wide-ranging conversation about the ethics of research, teaching, public outreach and professional courtesy. Particular highlights from the conversation are her thoughts on prestige bias in academia and the crisis of peer reviewing. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (5) - Brian Earp

Play Episode Listen Later Jul 12, 2022


In this episode I chat to Brian Earp. Brian is a Senior Research Fellow with the Uehiro Centre for Practical Ethics in Oxford. He is a prolific researcher and writer in psychology and applied ethics. We talk a lot about how Brian ended up where he is, the value of applied research and the importance of connecting research to the real world. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Ethics of Academia (4) - Justin Weinberg

Play Episode Listen Later Jul 5, 2022


In this episode of the Ethics of Academia, I chat to Justin Weinberg, Associate Professor of Philosophy at the University of South Carolina. Justin researches ethical and social philosophy, as well as metaphilosophy. He is also the editor of the popular Daily Nous blog and has, as a result, developed an interest in many of the moral dimensions of philosophical academia. Our conversation accordingly traverses a wide territory, from the purpose of philosophical research to the ethics of grading. You can download the episode here or listen below. You can also subscribe on Apple, Spotify, Google or any other preferred podcasting service.

Ethics of Academia (3) - Regina Rini

Play Episode Listen Later Jun 28, 2022


In this episode I talk to Regina Rini, Canada Research Chair at York University in Toronto. Regina has a background in neuroscience and cognitive science but now works primarily in moral philosophy. She has the distinction of writing a lot of philosophy for the public through her columns for the Times Literary Supplement, and the value of this becomes a major theme of our conversation. You can download the episode here or listen below. You can also subscribe on Apple, Spotify and other podcasting services.

Ethics of Academia (2) with Michael Cholbi

Play Episode Listen Later Jun 20, 2022


This is the second episode in my short series on The Ethics of Academia. In this episode I chat to Michael Cholbi, Professor of Philosophy at the University of Edinburgh. We reflect on the value of applied ethical research and the right approach to teaching. Michael has thought quite a lot about the ethics of work, in general, and the ethics of teaching and grading in particular. So those become central themes in our conversation. You can download the podcast here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

The Ethics of Academia Podcast (Episode 1 with Sven Nyholm)

Play Episode Listen Later Jun 15, 2022


I have been reflecting on the ethics of academic life for some time. I've written several articles about it over the years. These have focused on the ethics of grading, student-teacher relationships, academic career choice, and the value of teaching (among other things). I've only scratched the surface. It seems to me that academic life is replete with ethical dilemmas and challenges. Some systematic reflection on and discussion of those ethical challenges would seem desirable. Obviously, there is a fair bit of writing available on the topic but, as best I can tell, there is no podcast dedicated to it. So I decided to start one. I'm launching this podcast as both an addendum to my normal podcast (which deals primarily with the ethics of technology) and as an independent podcast in its own right. If you just want to subscribe to the Ethics of Academia, you can do so here (Apple and Spotify). (And if you do so, you'll get the added bonus of access to the first three episodes). I intend this to be a limited series but, if it proves popular, I might come back to it. In the first episode, I chat to Sven Nyholm (Utrecht University) about the ethics of research, teaching and administration. Sven is a longtime friend and collaborator. He has been one of my most frequent guests on my main podcast so he seemed like the ideal person to kickstart this series. Although we talk about a lot of different things, Sven draws particular attention to the ethical importance of the division of labour in academic life. You can download the episode here or listen below.

98 - The Psychology of Human-Robot Interactions

Play Episode Listen Later Jun 9, 2022


How easily do we anthropomorphise robots? Do we see them as moral agents or, even, moral patients? Can we dehumanise them? These are some of the questions addressed in this episode with my guests, Dennis Küster and Aleksandra Świderska. Dennis is a postdoctoral researcher at the University of Bremen. Aleksandra is a senior researcher at the University of Warsaw. They have worked together on a number of studies about how humans perceive and respond to robots. We discuss several of their joint studies in this episode. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
Dennis's webpage
Aleksandra's webpage
'I saw it on YouTube! How online videos shape perceptions of mind, morality, and fears about robots' by Dennis, Aleksandra and David Gunkel
'Robots as malevolent moral agents: Harmful behavior results in dehumanization, not anthropomorphism' by Aleksandra and Dennis
'Seeing the mind of robots: Harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes' by Dennis and Aleksandra

97 - The Perils of Predictive Policing (& Automated Decision-Making)

Play Episode Listen Later Apr 5, 2022


One particularly important social institution is the police force, which is increasingly using technological tools to deploy policing resources efficiently and effectively. I've covered criticisms of these tools in the past, but in this episode my guest Daniel Susser has some novel perspectives to share on this topic, as well as some broader reflections on how humans can relate to machines in social decision-making. This one was a lot of fun and covered a lot of ground. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
Daniel's homepage
Daniel on Twitter
'Predictive Policing and the Ethics of Preemption' by Daniel
'Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making' by Daniel (with Kiel Brennan-Marquez and Karen Levy)

96 - How Does Technology Mediate Our Morals?

Play Episode Listen Later Dec 1, 2021


It is common to think that technology is morally neutral. "Guns don't kill people; people kill people", as the typical gun lobby argument goes. But is this really the right way to think about technology? Could it be that technology is not as neutral as we might suppose? These are questions I explore today with my guest Olya Kudina. Olya is an ethicist of technology focusing on the dynamic interaction between values and technologies. Currently, she is an Assistant Professor at Delft University of Technology. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links
Olya's homepage
Olya on Twitter
'The technological mediation of morality: value dynamism, and the complex interaction between ethics and technology' - Olya's PhD thesis
'Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy' by Olya and Peter-Paul Verbeek
'"Alexa, who am I?": Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making' by Olya
'Moral Uncertainty in Technomoral Change: Bridging the Explanatory Gap' by Philip Nickel, Olya Kudina and Ibo van de Poel

95 - The Psychology of the Moral Circle

Play Episode Listen Later Nov 10, 2021


I was raised in the tradition of believing that everyone is of equal moral worth. But when I scrutinise my daily practices, I don't think I can honestly say that I act as if everyone is of equal moral worth. The idea that some people belong within the circle of moral concern and some do not is central to many moral systems. But what affects the dynamics of the moral circle? How does it contract and expand? Can it expand indefinitely? In this episode I discuss these questions with Joshua Rottman. Josh is an Associate Professor in the Department of Psychology and the Program in Scientific and Philosophical Studies of Mind at Franklin and Marshall College. His research is situated at the intersection of cognitive development and moral psychology, and he primarily focuses on studying the factors that lead certain entities and objects to be attributed with (or stripped of) moral concern. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
The normative significance of moral psychology
The concept of the moral circle
How the moral circle develops in children
How the moral circle changes over time
Can the moral circle expand indefinitely?
Do we have a limited budget of moral concern?
Do most people underuse their budget of moral concern?
Why do some people prioritise the non-human world over marginal humans?

Relevant Links
Josh's webpage at F and M College
Josh's personal webpage
Josh at Psychology Today
'Tree huggers vs Human Lovers' by Josh et al
Summary of the above article at Psychology Today
'Towards a Psychology of Moral Expansiveness' by Crimston et al

94 - Robot Friendship and Hatred

Play Episode Listen Later Nov 1, 2021


Can we move beyond the Aristotelian account of friendship when thinking about our relationships with robots? Can we hate robots? In this episode, I talk to Helen Ryland about these topics. Helen is a UK-based philosopher. She completed her PhD in Philosophy in 2020 at the University of Birmingham. She now works as an Associate Lecturer for The Open University. Her work examines human-robot relationships, video game ethics, and the personhood and moral status of marginal cases of human rights (e.g., subjects with dementia, nonhuman animals, and robots). You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics covered include:
What is friendship and why does it matter?
The Aristotelian account of friendship
Limitations of the Aristotelian account
Moving beyond Aristotle
The degrees of friendship model
Why we can be friends with robots
Criticisms of robot-human friendship
The possibility of hating robots
Do we already hate robots?
Why would it matter if we did hate robots?

Relevant Links
Helen's homepage
'It's Friendship Jim, But Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships' by Helen
'Could you hate a robot? Does it matter if you could?' by Helen

93 - Will machines impede moral progress?

Play Episode Listen Later Jul 19, 2021


Thomas Sinclair (left), Ben Kenward (right)

Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Should machines not be designed to allow for such changes? If machines are programmed to follow our current values, will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement, of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
What is a moral value?
What is a moral machine?
What is moral progress?
Has society progressed, morally speaking, in the past?
How can we design moral machines?
What's the problem with getting machines to follow our current moral consensus?
Will people over-defer to machines? Will they outsource their moral reasoning to machines?
Why is a lack of moral progress such a problem right now?

Relevant Links
Thomas's webpage
Ben's webpage
'Machine morality, moral progress and the looming environmental disaster' by Ben and Tom

92 - The Ethics of Virtual Worlds

Play Episode Listen Later Jul 9, 2021


Are virtual worlds free from the ethical rules of ordinary life? Do they generate their own ethical codes? How do gamers and game designers address these issues? These are the questions that I explore in this episode with my guest Lucy Amelia Sparrow. Lucy is a PhD Candidate in Human-Computer Interaction at the University of Melbourne. Her research focuses on ethics and multiplayer digital games, with other interests in virtual reality and hybrid boardgames. Lucy is a tutor in game design and an academic editor, and has held a number of research and teaching positions at universities across Hong Kong and Australia. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
Are virtual worlds amoral? Do we value them for their freedom from ordinary moral rules?
Is there an important distinction between virtual reality and games?
Do games generate their own internal ethics?
How prevalent are unwanted digitally enacted sexual interactions?
How do gamers respond to such interactions? Do they take them seriously?
How can game designers address this problem?
Do gamers tolerate immoral actions more than the norm?
Can there be a productive form of distrust in video game design?

Relevant Links
Lucy on Twitter
Lucy on Researchgate
'Apathetic villagers and the trolls who love them' by Lucy Sparrow, Martin Gibbs and Michael Arnold
'From 'Silly' to 'Scumbag': Reddit Discussion of a Case of Groping in a Virtual Reality Game' by Lucy et al
'Productive Distrust: Playing with the player in digital games' by Lucy et al
'The "digital animal intuition": the ethics of violence against animals in video games' by Simon Coghlan and Lucy Sparrow

91 - Rights for Robots, Animals and Nature?

Play Episode Listen Later Jun 30, 2021


Should robots have rights? How about chimpanzees? Or rivers? Many people ask these questions individually, but few people have asked them all together at the same time. In this episode, I talk to a man who has. Josh Gellers is an Associate Professor in the Department of Political Science and Public Administration at the University of North Florida, a Fulbright Scholar to Sri Lanka, a Research Fellow of the Earth System Governance Project, and a Core Team Member of the Global Network for Human Rights and the Environment. His research focuses on environmental politics, rights, and technology. He is the author of The Global Emergence of Constitutional Environmental Rights (Routledge 2017) and Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge 2020). We talk about the arguments and ideas in the latter book. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics covered include:
Should we even be talking about robot rights?
What is a right? What's the difference between a legal and a moral right?
How do we justify the ascription of rights?
What is personhood? Who counts as a person?
Properties versus relations - what matters more when it comes to moral status?
What can we learn from the animal rights case law?
What can we learn from the Rights of Nature debate?
Can we imagine a future in which robots have rights? What kinds of rights might those be?

Relevant Links
Josh's homepage
Josh on Twitter
Rights for Robots: Artificial Intelligence, Animal and Environmental Law by Josh (digital version available Open Access)
'Earth system law and the legal status of non-humans in the Anthropocene' by Josh

90 - The Future of Identity

Play Episode Listen Later Apr 28, 2021


What does it mean to be human? What does it mean to be you? Philosophers, psychologists and sociologists all seem to agree that your identity is central to how you think of yourself and how you engage with others. But how are emerging technologies changing how we enact and constitute our identities? That's the subject matter of this podcast with Tracey Follows. Tracey is a professional futurist. She runs a consultancy firm called Futuremade. She is a regular writer and speaker on futurism. She has appeared on the BBC and is a contributing columnist with Forbes. She is also a member of the Association of Professional Futurists and the World Futures Studies Federation. We talk about her book The Future of You: Can Your Identity Survive the 21st Century? You can download the podcast here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics covered in this episode include:
The nature of identity
The link between technology and identity
Is technology giving us more creative control over identity?
Does technology encourage more conformity and groupthink?
Is our identity being fragmented by technology?
Who controls the technology of identity formation?
How should we govern the technology of identity formation in the future?

Relevant Links
The Future of You by Tracey
Tracey on Twitter
Tracey at Forbes
Futuremade consultancy
Tracey's talk to the London Futurists

89 - Is Morality All About Cooperation?

Play Episode Listen Later Mar 26, 2021


What are the origins and dynamics of human morality? Is morality, at root, an attempt to solve basic problems of cooperation? What implications does this have for the future? In this episode, I chat to Dr Oliver Scott Curry about these questions. We discuss, in particular, his theory of morality as cooperation (MAC). Dr Curry is Research Director for Kindlab, at kindness.org. He is also a Research Affiliate at the School of Anthropology and Museum Ethnography, University of Oxford, and a Research Associate at the Centre for Philosophy of Natural and Social Science at the London School of Economics. He received his PhD from LSE in 2005. Oliver's academic research investigates the nature, content and structure of human morality. He tackles such questions as: What is morality? How did morality evolve? What psychological mechanisms underpin moral judgments? How are moral values best measured? And how does morality vary across cultures? To answer these questions, he employs a range of techniques from philosophy, experimental and social psychology, and comparative anthropology. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
The nature of morality
The link between human morality and cooperation
The seven types of cooperation
How these seven types of cooperation generate distinctive moral norms
The evidence for the theory of morality as cooperation
Is the theory underinclusive, reductive and universalist? Is that a problem?
Is the theory overinclusive? Could it be falsified?
Why Morality as Cooperation is better than Moral Foundations Theory
The future of cooperation

Relevant Links
Oliver's webpage
Oliver on Twitter
Oliver's podcast - The Map
'Morality as Cooperation: A Problem-Centred Approach' by Oliver (sets out the theory of MAC)
'Morality is fundamentally an evolved solution to problems of social co-operation' (debate at the Royal Anthropological Society)
'Moral Molecules: Morality as a combinatorial system' by Oliver and his colleagues
'Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies' by Oliver and colleagues
'What is wrong with moral foundations theory?' by Oliver

88 - The Ethics of Social Credit Systems

Play Episode Listen Later Feb 26, 2021


Should we use technology to surveil, rate and punish/reward all citizens in a state? Do we do it anyway? In this episode I discuss these questions with Wessel Reijers, focusing in particular on the lessons we can learn from the Chinese Social Credit System. Wessel is a postdoctoral Research Associate at the European University Institute, working in the ERC project "BlockchainGov", which looks into the legal and ethical impacts of distributed governance. His research focuses on the philosophy and ethics of technology, notably on the development of a critical hermeneutical approach to technology and the investigation of the role of emerging technologies in the shaping of citizenship in the 21st century. He completed his PhD at Dublin City University with a dissertation entitled "Practising Narrative Virtue Ethics of Technology in Research and Innovation". In addition to a range of peer-reviewed articles, he recently published the book Narrative and Technology Ethics with Palgrave, which he co-authored with Mark Coeckelbergh. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed in this episode include:
The Origins of the Chinese Social Credit System
Historical Parallels to the System
Social Credit Systems in Western Cultures
Is China exceptional when it comes to the use of these systems?
The impact of social credit systems on human values such as freedom and authenticity
How the social credit system is reshaping citizenship
The possible futures of social credit systems

Relevant Links
Wessel's homepage
Wessel on Twitter
'A Dystopian Future? The Rise of Social Credit Systems' - a written debate featuring Wessel
'How to Make the Perfect Citizen? Lessons from China's Model of Social Credit System' by Liav Orgad and Wessel Reijers
Narrative and Technology Ethics by Wessel Reijers and Mark Coeckelbergh

87 - AI and the Value Alignment Problem

Play Episode Listen Later Dec 23, 2020


How do we make sure that an AI does the right thing? How could we do this when we ourselves don't even agree on what the right thing might be? In this episode, I talk to Iason Gabriel about these questions. Iason is a political theorist and ethicist currently working as a Research Scientist at DeepMind. His research focuses on the moral questions raised by artificial intelligence. His recent work addresses the challenge of value alignment, responsible innovation, and human rights. He has also been a prominent contributor to the debate about the ethics of effective altruism. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes
Topics discussed include:
What is the value alignment problem?
Why is it so important that we get value alignment right?
Different ways of conceiving the problem
How different AI architectures affect the problem
Why there can be no purely technical solution to the value alignment problem
Six potential solutions to the value alignment problem
Why we need to deal with value pluralism and uncertainty
How political theory can help to resolve the problem

Relevant Links
Iason on Twitter
'Artificial Intelligence, Values and Alignment' by Iason
'Effective Altruism and its Critics' by Iason
My blog series on the above article
'Social Choice Ethics in Artificial Intelligence' by Seth Baum

86 - Are Video Games Immoral?

Play Episode Listen Later Dec 15, 2020


Have you ever played Hitman? Grand Theft Auto? Call of Duty? Did you ever question the moral propriety of what you did in those games? In this episode I talk to Sebastian Ostritsch about the ethics of video games. Sebastian is an Assistant Prof. (well, technically, he is a Wissenschaftlicher Mitarbeiter, but it's like an Assistant Prof) of Philosophy based at Stuttgart University in Germany. He has the rare distinction of being an expert both in Hegel and in the ethics of computer games. He is the author of Hegel: Der Welt-Philosoph (published this year in German) and is currently running a project, funded by the German research body DFG, on the ethics of computer games. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- The nature of video games
- The problem of seemingly immoral video game content
- The amorality thesis: the view that playing video games is morally neutral
- Defences of the amorality thesis: 'it's not real' and 'it's just a game'
- Problems with the 'it's not real' and 'it's just a game' arguments
- The Gamer's Dilemma: why do people seem to accept virtual murder but not, say, virtual paedophilia?
- Resolving the gamer's dilemma
- The endorsement view of video game morality: some video games might be immoral if they endorse an immoral worldview
- How these ideas apply to other forms of fictional media, e.g. books and movies

Relevant Links

- Sebastian's homepage (in German)
- Sebastian's book Hegel: Der Welt-Philosoph
- 'The amoralist's challenge to gaming and the gamer's moral obligation' by Sebastian
- 'The immorality of computer games: Defending the endorsement view against Young's objections' by Sebastian and Samuel Ulbricht
- The Gamer's Dilemma by Morgan Luck
- Homo Ludens by Johan Huizinga

85 - The Internet and the Tyranny of Perceived Opinion

Play Episode Listen Later Oct 27, 2020


Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today's podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in the Faculty of Business, Languages and Social Science at Østfold University College in Norway. He has a particular interest in political theory and philosophy, and has worked extensively on Thomas Hobbes and social contract theory, environmental ethics and game theory. At the moment his work focuses mainly on the dynamics between human individuals, society and technology. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- Selective exposure and confirmation bias
- How algorithms curate our informational ecology
- Filter bubbles
- Echo chambers
- How the internet has created more internally conformist but externally polarised groups
- The nature of political freedom
- Tocqueville and the tyranny of the majority
- Mill and the importance of individuality
- How algorithmic curation of speech is undermining our liberty
- What can be done about this problem?

Relevant Links

- Henrik's faculty homepage
- Henrik on ResearchGate
- Henrik on Twitter
- 'The Tyranny of Perceived Opinion: Freedom and information in the era of big data' by Henrik
- 'Privacy as an aggregate public good' by Henrik
- 'Freedom under the gaze of Big Brother: Preparing the grounds for a liberal defence of privacy in the era of Big Data' by Henrik
- 'When nudge comes to shove: Liberty and nudging in the era of big data' by Henrik

84 - Social Media, COVID-19 and Value Change

Play Episode Listen Later Oct 20, 2020


Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and aesthetics. He has published papers on roboethics, art and technology, and philosophy of science. In his previous research he also explored philosophical issues related to humour and amusement. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- What is a value?
- Descriptive vs normative theories of value
- Psychological theories of personal values
- The nature of emotions
- The connection between emotions and values
- Emotional contagion
- Emotional climates vs emotional atmospheres
- The role of social media in causing emotional contagion
- Is the coronavirus promoting a negative emotional climate?
- Will this affect our political preferences and policies?
- General lessons for technology and value change

Relevant Links

- Steffen's homepage
- The Designing for Changing Values project @ TU Delft
- 'Corona and Value Change' by Steffen
- 'Unleashing the Constructive Potential of Emotions' by Steffen and Sabine Roeser
- An overview of the Schwartz theory of basic personal values

83 - Privacy is Power

Play Episode Listen Later Oct 10, 2020


Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI at Oxford University. She is also a Tutorial Fellow at Hertford College, Oxford. She works on privacy, technology, moral and political philosophy, and public policy. She has also been a guest on this podcast on two previous occasions. Today, we'll be talking about her recently published book Privacy is Power. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed in this show include:

- The most surprising examples of digital surveillance
- The nature of privacy
- Is privacy dead?
- Privacy as an intrinsic and instrumental value
- The relationship between privacy and autonomy
- Does surveillance help with security and health?
- The problem with mass surveillance
- The phenomenon of toxic data
- How surveillance undermines democracy and freedom
- Are we willing to trade privacy for convenient services?
- And much more

Relevant Links

- Carissa's webpage
- Privacy is Power by Carissa
- Summary of Privacy is Power in Aeon
- Review of Privacy is Power in The Guardian
- Carissa's Twitter feed (a treasure trove of links about privacy and surveillance)
- Views on Privacy: A Survey by Sian Brooke and Carissa Véliz
- Data, Privacy and the Individual by Carissa Véliz

82 - What should we do about facial recognition technology?

Play Episode Listen Later Sep 23, 2020


Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. Its uses run the gamut from controversial deployments by governments and police forces to coordinated campaigns to ban or limit the technology. What should we do about it? In this episode, I talk to Brenda Leong about this issue. Brenda is Senior Counsel and Director of Artificial Intelligence and Ethics at the Future of Privacy Forum. She manages the FPF portfolio on biometrics, particularly facial recognition. She authored the FPF Privacy Expert's Guide to AI and co-authored the paper "Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models." Prior to working at FPF, Brenda served in the U.S. Air Force. You can listen to the episode below or download it here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- What is facial recognition anyway? Are there multiple forms that are confused and conflated?
- What's the history of facial recognition? What has changed recently?
- How is the technology used?
- What are the benefits of facial recognition?
- What's bad about it? What are the privacy and other risks?
- Is there something unique about the face that should make us more worried about facial biometrics compared to other forms?
- What can we do to address the risks? Should we regulate or ban?

Relevant Links

- Brenda's homepage
- Brenda on Twitter
- 'The Privacy Expert's Guide to AI and Machine Learning' by Brenda (at FPF)
- Brenda's US Congress testimony on facial recognition
- 'Facial recognition and the future of privacy: I always feel like … somebody's watching me' by Brenda
- 'The Case for Banning Law Enforcement From Using Facial Recognition Technology' by Evan Selinger and Woodrow Hartzog
