Coming to you from C4E, the Centre for Ethics, University of Toronto, and its Ethics of AI Lab ("Where Conversations About Ethics Happen"): conversations about ... ethics! Another podcast worth checking out: C4eRadio: Sounds of Ethics, our regularly updated collection of dozens of C4E lectures.
Matthew Gourlay, a criminal lawyer at Henein Hutchison LLP (Toronto), discusses the tough (and not-so-tough) ethical issues criminal defense lawyers face in the legal system and beyond.
Ida Koivisto, a law professor at the University of Helsinki, discusses her exciting work at the intersection of legal and social theory, administrative law, and the ethics of artificial intelligence, which critically analyzes the frequent calls for transparency in machine learning and automated decision making.
Suzanne van Geuns joins the Let's Get Ethical podcast to discuss her research into seduction forums and the TheRedPill subreddit. She explains how the anti-feminist internet purports to educate men and women about their true nature. Examining these instructional guides as a scholar of religion, van Geuns sheds light on some of the darkest parts of the internet.
In the second part of this two-part episode, Dr. Sunit Das focuses on a New York Times article by a fellow neurosurgeon (Dr. Joseph Stern of Greensboro, N.C.) entitled "Moral Distress in Neurosurgery" to closely examine the unique ethical dilemmas encountered by neurosurgeons, who often interact with patients at their most vulnerable while being tasked with making the most difficult decisions about their lives. Dr. Das stresses the need for physicians in general, and neurosurgeons in particular, to seek to understand their patients' values and ethical outlooks by asking, "If time were short, what would be important to you?"
In the first half of this two-part episode, neurosurgeon and neuroscientist Sunit Das talks about the ethical dimensions of medicine as a profession and a vocation, and the particular challenges regularly facing neurosurgeons, their patients, and their patients' families in life-and-death situations.
Machine learning systems are implemented by all the big tech companies in everything from ad auctions to photo-tagging, and are supplementing or replacing human decision making in a host of more mundane, but possibly more consequential, areas like loans, bail, policing, and hiring. And we’ve already seen plenty of dangerous failures, from risk assessment tools that systematically rate black arrestees as riskier than white ones to hiring algorithms that learned to reject women. There’s a broad consensus across industry, academe, government, and civil society that there is a problem here, one that presents a deep challenge to core democratic values, but there is much debate over what kind of problem it is and how it might be solved. Taking a sociological approach to the current boom in ethical AI and machine learning initiatives that promise to save us from the machines, this talk explores how this problem becomes a problem, for whom, and with what solutions.
Teresa Heffernan on why the humanities, the centuries-old study of human society and culture that relies on facts and evidence, should not be swallowed up by the recent trend toward a different type of knowledge generated by algorithms, big data, and machines, and why it is imperative to keep the tension between these fields alive.
Media literacy is more urgent than ever in our day, as is the need for deeper forms of cultural and technological literacy. These are the real font of freedom and democracy, not any cozy relationship between Zuckerbergian bromides and anti-regulatory government feebleness.