Ethics of AI in Context


A selection of interviews and talks exploring the normative dimensions of AI and related technologies in individual and public life, brought to you by the interdisciplinary Ethics of AI Lab at the Centre for Ethics, University of Toronto.


    • Latest episode: Jul 8, 2022
    • New episodes: infrequent
    • Average duration: 42m
    • 61 episodes



    Latest episodes from Ethics of AI in Context

    Conference: Trust and the Ethics of AI

    Jul 8, 2022 · 202:46


    This workshop aims to address some of the insights that we have gained about the ethics of AI and the concept of trust. We critically explore practical and theoretical issues relating to values and frameworks, engaging with carebots, evaluations of decision support systems, and norms in the private sector. We assess the objects of trust in a democratic setting and discuss how scholars can further shift insights from academia to other sectors. Workshop proceedings will appear in a special symposium issue of C4eJournal.net.

    Speakers:
    Judith Simon (University of Hamburg), Can and Should We Trust AI?
    Vivek Nallur (University College Dublin), Trusting a Carebot: Towards a Framework for Asking the Right Questions
    Justin B. Biddle (Georgia Institute of Technology), Organizational Perspectives on Trust and Values in AI
    Sina Fazelpour (Northeastern University), Where Are the Missing Humans? Evaluating AI Decision Support Systems in Context
    Esther Keymolen (Tilburg University), Trustworthy Tech Companies: Talking the Talk or Walking the Walk?
    Ori Freiman (University of Toronto), Making Sense of the Conceptual Nonsense “Trustworthy AI”: What's Next?

    Conference: Afrofuturism And The Law

    May 19, 2022 · 75:52


    Long before the film Black Panther captured the public's imagination, the cultural critic Mark Dery had coined the term “Afrofuturism” to describe “speculative fiction that treats African-American themes and addresses African-American concerns in the context of twentieth-century technoculture.” Since then, the term has been applied to speculative creatives as diverse as the pop artist Janelle Monae, the science fiction writer Octavia Butler, and the visual artist Nick Cave. But only recently have thinkers turned to how Afrofuturism might guide, and shape, law. The participants in this workshop explore the many ways Afrofuturism can inform a range of legal issues, and even chart the way to a better future for us all.

    Introduction: Bennett Capers (Law, Fordham)
    Panel 1: Ngozi Okidegbe (Law, Cardozo), Of Afrofuturism, Of Algorithms; Alex Zamalin (Political Science & African American Studies, Detroit Mercy), Afrofuturism as Reconstitution
    Panel 2: Rasheedah Phillips (PolicyLink), Race Against Time: Afrofuturism and Our Liberated Housing Futures; Etienne C. Toussaint (Law, South Carolina), For Every Rat Killed

    Nathan Olmstead, We Are All Ghosts: Sidewalk Toronto

    Apr 13, 2022 · 30:35


    As the fabric of the city becomes increasingly fibreoptic, enthusiasm for the speed and ubiquity of digital infrastructure abounds. From Toronto to Abu Dhabi, new technologies promise the ability to observe, manage, and experience the city in so-called real-time, freeing cities from the spatiotemporal restrictions of the past. In this project, I look at the way this appreciation for the real-time is influencing our understanding of the datafied urban subject. I argue that this dominant discourse locates digital infrastructure within a broader metaphysics of presence, in which instantaneous data promise an unmediated view of both the city and those within it. The result is a levelling of residents along an overarching, linear, and spatialized timeline that sanitizes the temporal and rhythmic diversity of urban spaces. This same levelling effect can be seen in contemporary regulatory frameworks, which focus on the rights or sovereignty of a largely atomized urban subject removed from its spatiotemporal context. A more equitable alternative must therefore consider the temporal diversity, relationality, and inequality implicit within the datafied city, an alternative I begin to ground in Jacques Derrida's notion of the spectre. This work is conducted through an exploration of Sidewalk Labs' pioneering use of the term “urban data” during its foray in Toronto, which highlights the potentiality of alternative, spectral data governance models at the same time as it reflects the limitations of existing frameworks. Nathan Olmstead Urban Studies University of Toronto

    Kamilah Ebrahim & Erina Moon, Building Algorithms that Work for Everyone

    Apr 1, 2022 · 19:52


    Oftentimes, the development of algorithms is divorced from the environments where they will eventually be deployed. In high-stakes contexts, like child welfare services, policymakers and technologists must exercise a high degree of caution in the design and deployment of decision-making algorithms or risk further marginalising already vulnerable communities. This talk will seek to explain the status quo of child welfare algorithms, what we miss when we fail to include context in the development of algorithms, and how the addition of qualitative text data can help to make better algorithms. Kamilah Ebrahim iSchool University of Toronto Erina Moon iSchool University of Toronto

    Sharon Ferguson, Increasing Diversity In Machine Learning And Artificial Intelligence

    Mar 23, 2022 · 45:30


    Machine Learning and Artificial Intelligence are powering the applications we use, the decisions we make, and the decisions made about us. We have already seen numerous examples of what happens when these algorithms are designed without diversity in mind: facial recognition algorithms, recidivism algorithms, and resume reviewing algorithms all produce non-equitable outcomes. As Machine Learning (ML) and Artificial Intelligence (AI) expand into more areas of our lives, we must take action to promote diversity among those working in this field. A critical step in this work is understanding why some students who choose to study ML/AI later leave the field. In this talk, I will outline the findings from two iterations of survey-based studies that start to build a model of intentional persistence in the field. I will highlight the findings that suggest drivers of the gender gap, review what we've learned about persistence through these studies, and share open areas for future work. Sharon Ferguson Industrial Engineering University of Toronto

    Julian Posada, The Coloniality Of Data Work For Machine Learning

    Mar 16, 2022 · 47:06


    Many research and industry organizations outsource data generation, annotation, and algorithmic verification—or data work—to workers worldwide through digital platforms. A subset of the gig economy, these platforms consider workers independent users with no employment rights, pay them per task, and control them with automated algorithmic managers. This talk explores how the coloniality of data work is characterized by an extractivist method of generating data that privileges profit and the epistemic dominance of those in power. Social inequalities are reproduced through the data production process, and local worker communities mitigate these power imbalances by relying on family members, neighbours, and colleagues online. Furthermore, management in outsourced data production ensures that workers' voices are suppressed in the data annotation process through algorithmic control and surveillance, resulting in datasets generated exclusively by clients, with their worldviews encoded in algorithms through training. Julian Posada Faculty of Information University of Toronto

    Tom Yeh & Benjamin Walsh, Is AI Creepy Or Cool? Teaching Teens About AI And Ethics

    Mar 16, 2022 · 58:12


    Teens have different attitudes toward AI. Some are excited by AI's promises to change their future. Some are afraid of AI's problems. Some are indifferent. There is a consensus among educators that AI is a “must-teach” topic for teens. But how? In this talk, we will share our experiences and lessons learned from the Imagine AI project, funded by the National Science Foundation and advised by the Center for Ethics (C4E). Unlike other efforts focusing on AI technologies, Imagine AI takes a unique approach by focusing on AI ethics. Since 2019, we have partnered with more than a dozen teachers to teach hundreds of students in different classrooms and schools about AI ethics. We tried a variety of pedagogies and tested a range of AI ethics topics to understand their relative effectiveness in educating and engaging students. We found promising opportunities, such as short stories, as well as tensions. Our short stories are original, centering on young protagonists, and contextualizing ethical dilemmas in scenarios relatable to teens. We will share which stories are more engaging than others, how teachers are using the stories in classrooms, and how students are responding to the stories. Moreover, we will discuss the tensions we identified. For students, there is a tension of balance: how can we teach AI ethics without inducing a chilling effect? For teachers, there is a tension of authority: which teacher, a social studies teacher well-versed in social issues, a science teacher skilled in modern technology, or an English teacher experienced in discussing dilemmas and critical thinking, would be the most authoritative to teach about AI ethics? Another tension is urgency: while teachers agree AI ethics is an urgent topic because of AI's far-reaching influence on teens' future, they struggle to meet teens' even more urgent and immediate needs, such as social-emotional issues worsened by the pandemic, interruption of education, loss of housing, and even school shootings. Is now really a good time to talk about AI ethics? But if not now, when? We will discuss the implications of these tensions and potential solutions. We will conclude with a call to action for experts on AI and ethics to partner with educators to help our future generations “imagine AI.” Tom Yeh Computer Science University of Colorado Benjamin Walsh Education University of Colorado

    Mishall Ahmed, Difference Centric Yet Difference Transcended

    Mar 4, 2022 · 30:55


    Developed along existing asymmetries of power, AI and its applications further entrench, if not exacerbate, social, racialized, and gendered inequalities. As critical discourse grows, scholars make the case for the deployment of ethics and ethical frameworks to mitigate harms disproportionately impacting marginalized groups. However, there are foundational challenges to the actualization of harm reduction through a liberal ethics of AI. In this talk I will highlight the foundational challenges posed to the goals of harm reduction through ethics frameworks and their reliance on social categories of difference. Mishall Ahmed Political Science York University

    From Human Rights To Digital Human Rights – A Proposed Typology

    Mar 4, 2022 · 38:15


    ‘The same rights that people have offline must also be protected online' has been invoked in recent years as a dominant concept in international discourse about human rights in cyberspace. But does this notion of ‘normative equivalency' between the ‘offline' and the ‘online' afford effective protection for human rights in the digital age? The presentation reviews the development of human rights in cyberspace as they were conceptualized and articulated in international fora and critically evaluates the normative equivalency paradigm adopted by international bodies for the online application of human rights. It then attempts to describe the contours of a new digital human rights framework, which goes beyond the normative equivalency paradigm, and presents a typology of three ‘generations' or modalities in the evolution of digital human rights. In particular, we focus on the emergence of new digital human rights and present two prototype rights – the right to Internet access and the right not to be subject to automated decisions – and discuss the normative justifications invoked for recognizing these new digital human rights. We propose that such a multilayered framework corresponds better than the normative equivalency paradigm to the unique features and challenges of upholding human rights in cyberspace. Dafna Dror-Shpoliansky Hebrew University Law Yuval Shany Hebrew University Law

    Wendy Wong, Data You And The Challenge For Data Rights (Ethics Of AI In Context)

    Feb 18, 2022 · 39:24


    Human rights are one of the major innovations of the 20th century. Their emergence after World War II and global uptake promised a new world of universalized humanity in which human dignity would be protected, and individuals would have agency and flourish. The proliferation of digital data (i.e. datafication) and its intertwining with our lives, coupled with the growth of AI, signals a fundamental shift in the human experience. To date, human rights have not yet grappled fully with the implications of datafication. Yet, they remain our best hope for ensuring human autonomy and dignity, if they can be rebooted to take into account the “stickiness” of data. The talk will discuss how international human rights are structured, introduce the notion of Data You, explain why Data You is here to stay, and consider how this affects notions of data rights. Wendy Wong Political Science University of Toronto

    Kelly Hannah-Moffat, Algorithmic Adaptability And Ethics Washing - Appropriating The Critique

    Feb 3, 2022 · 42:54


    The emergence of artificial intelligence (AI) and, more specifically, machine learning analytics fuelled by big data, is altering some legal and criminal justice practices. Harnessing the abilities of AI creates new possibilities, but it also risks reproducing the status quo and further entrenching existing inequalities. The potential of these technologies has simultaneously enthused and alarmed scholars, advocates, and practitioners, many of whom have drawn attention to the ethical concerns associated with the widespread use of these technologies. In the face of sustained critiques, some companies have rebranded, positioning their AI technologies as more ethical, transparent, or accountable. However, even if a technology is defensibly ‘ethical,' its combination with pre-existing institutional logics and practices reinforces patterns of inequality. In this paper we focus on two examples, legal analytics and predictive policing, to explore how companies are mobilizing the language and logics of ethical algorithms to rebrand their technologies. We argue this rebranding is a form of ethics washing, which obfuscates the appropriateness and limitations of these technologies in particular contexts.

    Algorithmic Policing Policies Through A Human Rights And Substantive Equality Lens

    Jan 27, 2022 · 53:54


    Panelists: Kristen Thomasen, Suzie Dunn & Kate Robertson
    This panel will discuss Citizen Lab and LEAF's collaborative submission to the Toronto Police Services Board's public consultation on its draft policy for AI use by the Toronto police with the three co-authors of the submission. The submission made 33 specific recommendations to the TPSB with a focus on substantive equality and human rights. The panelists will discuss some of those recommendations and the broader themes identified in the draft policy.

    Ori Freiman, The Ethics Of Central Bank Digital Currency

    Jan 20, 2022 · 43:17


    No one has any doubt that the future of the economic system is digital. Central banks worldwide worry that the rising popularity and adoption of cryptocurrencies, other means of payment, and new financial instruments pose a risk to early fintech adopters and the economy at large. As an alternative, most central banks worldwide, led by the Bank for International Settlements, consider the issuance of a CBDC (Central Bank Digital Currency) – the digital form of a country's fiat money. A CBDC differs from existing cashless payment forms such as card payments and credit transfers: it represents a direct claim on a central bank rather than a financial obligation to an institution. The digital nature of the transactions, together with algorithms, AIs, and the vast amount of data that such a system produces, can lead to many advantages: money supply, interest rates, and other features of the system are expected to be automatically aligned with the monetary policy to achieve financial stability. In addition, tracking digital money routes reduces the ability to launder money and hide payments for illegal activities, and makes it harder to evade taxes (and easier to collect them accurately and automatically). As with any promising technology, this digital manifestation of money has a dystopian side, too. In this presentation, I focus on identifying the ethical concerns and considerations – for individuals and for democratic society. I will describe how data from such a system can lead to unjust discrimination, how it enables surveillance in its utmost sense, how social developments are at risk of being stalled, and how such technology can encourage self-censorship and cast a shadow over the freedom of expression and association. I'll end with normative recommendations. System designers, developers, infrastructure builders, and regulators must involve civic organizations, public experts, and others to ensure the representation of diverse public interests. Inclusion and diversity are the first lines of defence against discrimination and biases in society, business, and technology.
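
    A toy ledger makes the structural point above concrete: a CBDC balance is a direct claim on the central bank, and every transfer leaves a trail visible to the issuer. The sketch below is my own illustration under invented names (CentralBankLedger and its fields), not a description of any actual CBDC design:

```python
# A toy sketch (mine, not from the talk) of the direct-claim architecture.
from dataclasses import dataclass, field

@dataclass
class CentralBankLedger:
    balances: dict = field(default_factory=dict)  # holder -> direct claim on the central bank
    trail: list = field(default_factory=list)     # every transfer is recorded centrally

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        assert self.balances.get(sender, 0) >= amount, "insufficient claim"
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        # The same record that supports tax collection and anti-laundering
        # enforcement is also the basis of the surveillance concern.
        self.trail.append((sender, receiver, amount))

ledger = CentralBankLedger({"alice": 100})
ledger.transfer("alice", "bob", 40)
print(ledger.balances)  # {'alice': 60, 'bob': 40}
print(ledger.trail)     # [('alice', 'bob', 40)]
```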

    Pasquale & Malgieri, The New Turn On AI Accountability From The EU Regulation And Beyond

    Dec 3, 2021 · 28:29


    In recent years, legal scholars and computer scientists have widely discussed how to reach a good level of AI accountability and fairness. The first attempts focused on the right to an explanation of algorithms, but such an approach has often proven unfeasible and fallacious, owing to the lack of legal consensus on the existence of that right in different legislations, to the lack of consensus on the content of a satisfactory explanation, and to the technical limits of satisfactory causal explanations for deep learning models. Several scholars have therefore shifted their attention from the legibility of algorithms to the evaluation of the “impacts” of such autonomous systems on human beings, through “Algorithmic Impact Assessments” (AIA). This paper, building on the AIA frameworks, advances a policy-making proposal for a test to “justify” (rather than merely explain) algorithms.

    In practical terms, this paper proposes a system of “unlawfulness by default” for AI systems, an ex-ante model in which AI developers have the burden of proof to justify (on the basis of the outcome of their Algorithmic Impact Assessment) that their autonomous system is not discriminatory, not manipulative, not unfair, not inaccurate, not illegitimate in its legal bases and in its purposes, not using an unnecessary amount of data, etc. In the EU, the GDPR and the newly proposed AI Regulation already tend toward a sustainable environment of desirable AI systems, which is broader than any ambition to have “transparent” or “explainable” AI: it also requires “fair”, “lawful”, “accurate”, “purpose-specific”, data-minimalistic, and “accountable” AI. This might be achieved through a practical “justification” process and statement through which the data controller proves in practical terms the legality of an algorithm, i.e., its respect for all data protection principles (which in the GDPR are fairness, lawfulness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, and accountability). This justificatory approach might also resolve many existing problems in the AI explanation debate: e.g., the difficulty of “opening” black boxes, the transparency fallacy, and the legal difficulties of enforcing a right to receive individual explanations.

    Under a policy-making approach, this paper proposes a pre-approval model in which algorithm developers, before launching their systems onto the market, should perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment shows that these systems are high-risk, an approval request (to a strict regulatory authority, like a Data Protection Agency) should follow. In other terms, we propose a presumption of unlawfulness for high-risk models, with AI developers bearing the burden of proof to justify why the algorithm is not illegitimate (and thus not unfair, not discriminatory, not inaccurate, etc.). The EU AI Regulation seems to go in this direction. It proposes a model of partial unlawfulness-by-default. However, it is still too lenient: the category of high-risk AI systems is too narrow (it excludes commercial manipulation leading to economic harms, emotion recognition, general vulnerability exploitation, AI in the healthcare field, etc.), and the sanction in case of non-conformity with the Regulation is a monetary sanction, not a prohibition.
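
    A minimal sketch can make the proposed flow concrete. The code below is a hypothetical rendering of the unlawfulness-by-default pre-approval model described above; the names (AlgorithmicImpactAssessment, deployment_status) are invented for illustration, and this is not an implementation of the GDPR or the AI Act:

```python
# Hypothetical sketch of the "unlawfulness by default" pre-approval flow.
from dataclasses import dataclass, field

GDPR_PRINCIPLES = [
    "fairness", "lawfulness", "transparency", "purpose limitation",
    "data minimization", "accuracy", "storage limitation",
    "integrity", "accountability",
]

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    high_risk: bool
    justifications: dict = field(default_factory=dict)  # principle -> evidence

def deployment_status(aia: AlgorithmicImpactAssessment,
                      regulator_approved: bool = False) -> str:
    """Presume unlawfulness until the developer discharges the burden of
    proof for every principle; high-risk systems also need approval."""
    missing = [p for p in GDPR_PRINCIPLES if p not in aia.justifications]
    if missing:
        return f"unlawful by default: no justification for {missing}"
    if aia.high_risk and not regulator_approved:
        return "high-risk: approval request to the regulatory authority required"
    return "justified and self-certified: may be placed on the market"

aia = AlgorithmicImpactAssessment(
    "resume-screener", high_risk=True,
    justifications={p: "evidence..." for p in GDPR_PRINCIPLES})
print(deployment_status(aia))                           # approval still required
print(deployment_status(aia, regulator_approved=True))  # may be marketed
```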

    Kiel Brennan-Marquez, The (Non)Automatability Of Equity

    Nov 26, 2021 · 43:46


    We are in the midst of ongoing debate about whether, in principle, the enforcement of legal rules—and corresponding decisional processes—can be automated. Often neglected in this conversation is the role of equity, which has historically worked as a particularized constraint on legal decision-making. Certain kinds of equitable adjustments may be susceptible to automation—or at least, just as susceptible as legal rules themselves. But other kinds of equitable adjustments will not be, no matter how powerful machines become, because they require non-formalizable modes of judgment. This should give us pause about all efforts toward legal automation, because it is not clear—or even conceptually determinate—which kinds of legal decisions will end up, in practice, implicating non-automatable forms of equity. Kiel Brennan-Marquez University of Connecticut Associate Professor of Law Faculty Director of the Center on Community Safety, Policing and Inequality

    Mathew Iantorno, Automating Care, Manufacturing Crisis

    Oct 29, 2021 · 45:13


    Artificially intelligent agents that provide care for human beings are becoming an increasing reality globally. From disembodied therapists to robotic nurses, new technologies have been framed as a means of addressing intersecting labour shortages, demographic shifts, and economic shortfalls. However, as we race towards AI-focused solutions, we must scrutinize the challenges of automating care. This talk engages in a two-part reflection on these challenges. First, issues of building trust and rapport in such relationships will be examined through an extended case study of a chatbot intended to help individuals quit smoking. Second, the institutional rationale for favouring machine-focused solutions over human-focused ones will be questioned through the speaker's concept of crisis automation. Throughout, new equitable cybernetic relationships between those provisioning and receiving care will be platformed.

    Avery Slater, The Golem and the Game of Automation (Ethics of AI in Context)

    Oct 27, 2021 · 35:20


    Norbert Wiener, a foundational force in cybernetics and information theory, often used the allegory of the Golem to represent the ethical complexities inherent in machine learning. Recent advances in the field of reinforcement learning (RL) deal explicitly with problems laid out by Wiener's earlier writings, including the importance of games as learning environments for the development of AI agents. This talk explores issues from contemporary machine learning that express Wiener's prescient notion of developing a “significant game” between creator and machine.
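
    For readers unfamiliar with the technical side, a tiny example shows what “games as learning environments” means in reinforcement learning: an agent improves its policy purely by playing. The sketch below is a generic tabular Q-learning toy of my own, not material from the talk:

```python
# Toy of "games as learning environments": Q-learning on a five-cell game
# where the agent is rewarded for reaching the rightmost cell.
import random

N_STATES, ACTIONS = 5, (-1, +1)   # positions 0..4; actions: move left / right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):              # play 500 episodes of the game
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# The learned policy: move right (+1) from every non-terminal cell.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```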

    Abdi Aidid, Legal Prediction and Calcification Risk

    Oct 7, 2021 · 37:19


    The application of artificial intelligence (AI) to the law has enabled lawyers and judges to predict – with some accuracy – how future courts are likely to rule in new situations. Machine learning algorithms do this by synthesizing historical case law and applying that corpus of precedent to new factual scenarios. Early evidence suggests that these tools are enjoying steady adoption and will continue to proliferate in legal institutions. Though AI-enabled legal prediction has the potential to significantly augment human legal analyses, it also raises ethical questions that have received scant coverage in the literature. This talk focuses on one such ethical issue: the “calcification problem.” The basic question is as follows: If predictive algorithms rely chiefly on historical case law, and if lawyers and judges depend on these historically-informed predictions to make arguments and write judicial opinions, is there a risk that future law will merely reproduce the past? Put differently, will fewer and fewer cases depart from precedent, even when necessary to achieve legitimate and just outcomes? This is a particular concern for areas of law where societal values change at a rate faster than new precedents are produced. This talk describes the legal, political and ethical dimensions of the calcification problem and suggests interventions to mitigate the risk of calcification. Abdi Aidid Faculty of Law University of Toronto
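
    The feedback loop behind the calcification problem can be simulated in a few lines. The toy model below is my own illustration with invented numbers: a tool predicts the modal outcome in the existing corpus, most judges follow it, and the new rulings are folded back into the corpus, so departures from precedent decay over time:

```python
# A toy simulation (mine, not from the talk) of the calcification loop.
import random

random.seed(0)
# True = the ruling departed from precedent; start at a ~30% departure rate.
corpus = [random.random() < 0.30 for _ in range(200)]

for generation in range(6):
    departure_rate = sum(corpus) / len(corpus)
    print(f"generation {generation}: departure rate {departure_rate:.1%}")
    prediction = departure_rate > 0.5      # the tool predicts the modal outcome
    new_rulings = []
    for _ in range(100):
        follows_tool = random.random() < 0.9   # 90% of judges defer to the tool
        ruling = prediction if follows_tool else (random.random() < 0.30)
        new_rulings.append(ruling)
    corpus.extend(new_rulings)             # today's rulings are tomorrow's precedent
```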

    Rodrigo Ochigame, Actuarialism And Racial Capitalism (Ethics Of AI In Context)

    Jun 7, 2021 · 23:54


    As national and regional governments form expert commissions to regulate “automated decision-making,” a new corporate-sponsored field of research proposes to formalize the elusive ideal of “fairness” as a mathematical property of algorithms and especially of their outputs. Computer scientists, economists, lawyers, lobbyists, and policy reformers wish to hammer out, in advance or in place of regulation, algorithmic redefinitions of “fairness” and such legal categories as “discrimination,” “disparate impact,” and “equal opportunity.” But general aspirations to fair algorithms have a long history. This talk recounts some past attempts to answer questions of fairness through the use of algorithms. In particular, it focuses on “actuarial” practices of individualized risk classification in private insurance firms, consumer credit bureaus, and police departments since the late nineteenth century. The emerging debate on algorithmic fairness may be read as a response to the latest moral crisis of computationally managed racial capitalism. Rodrigo Ochigame History, Anthropology, & Science, Technology, and Society MIT

    Alex Hanna, Data, Transparency, And AI Ethics

    Jun 7, 2021 · 39:30


    Alex Hanna ML Fairness Google

    Julian Posada, Disembeddedness in Data Annotation for Machine Learning

    Jun 7, 2021 · 50:02


    What happens when data annotation and algorithmic verification occur in a significantly deregulated market? Today, many AI companies outsource these essential steps in developing machine learning algorithms to workers worldwide through digital labour platforms. This labour market has experienced a race-to-the-bottom environment in which most of the workers are situated in Venezuela, a country experiencing a profound social, political, and economic crisis, with the world's highest inflation rates. This talk presents preliminary findings of ongoing research to explore how the “disembeddedness” of this market, in which economic activity is unconstrained (or deregulated) by institutions, affects workers' livelihoods and, ultimately, the algorithms they are shaping. The talk explores this situation through the working conditions of platform users, the composition of their local networks, and the power relations between them, ML developers, and platforms.

    Ben Green, Algorithmic Governance: The Promises and Perils of Government Algorithms

    Jun 7, 2021 · 46:55


    Governments increasingly use algorithms (such as machine learning predictions) as a central tool to distribute resources and make important decisions. Although these algorithms are often hailed for their ability to improve public policy implementation, they also raise significant concerns related to racial oppression, surveillance, inequality, technocracy, and privatization. While some government algorithms demonstrate an ability to advance important public policy goals, others—such as predictive policing, facial recognition, and welfare fraud detection—exacerbate already unjust policies and institutions. This talk will explore some of the technical, political, and institutional factors that lead to algorithmic harms and will introduce an agenda for developing and regulating algorithms in the interest of equity and social justice.

    Suzanne Kite and Scott Benesiinaabandan, Indigenous Protocols and Artificial Intelligence

    Jun 7, 2021 · 33:35


    Scott Benesiinaabandan and Suzanne Kite in conversation around their research, practice, and contributions to the Indigenous Protocol and Artificial Intelligence Position Paper.

    Elettra Bietti, Viewing Tech Ethics from Within Moral Philosophy

    Jun 7, 2021 · 40:18


    This talk argues that the rhetoric of ethics and morality should not be reductively instrumentalized, either by the industry in the form of “ethics washing,” or by scholars and policy-makers in the form of “ethics bashing.” Grappling with the role of philosophy and ethics requires moving beyond simplification and seeing ethics as a mode of inquiry that facilitates the evaluation of competing tech policy strategies. In other words, we must resist the narrow reduction of moral philosophy to instrumentalized performance and renew our faith in its intrinsic moral value as a mode of knowledge-seeking and inquiry. Far from mandating a self-regulatory scheme or a given governance structure, moral philosophy in fact facilitates the questioning and reconsideration of any given practice, situating it within a complex web of legal, political and economic institutions. Moral philosophy indeed can shed new light on human practices by adding needed perspective, explaining the relationship between technology and other worthy goals, and situating technology within the human, the social, and the political.

    Devin Guillory, Combatting Anti-Blackness in the AI Community

    Jun 7, 2021 · 56:22


    The creation of Artificial Intelligence technologies is a communal act. As such, which ideas, people, and technologies are developed is deeply rooted in societal structures that are rarely questioned or thoroughly examined by AI researchers. This talk will focus on mechanisms within the AI community that perpetuate or amplify Anti-Blackness, both within our community and our greater societal structures. From research agendas and funding sources to collaborations and job opportunities, there are countless places where inequality manifests within our community. In addition to describing where and how Anti-Blackness occurs, this talk will share lessons learned from community organizing within the AI community and describe some immediate steps that can be taken to build a more just community.

    Kamilah Ebrahim, The Limits of Anti-Trust Regulation

    Jun 7, 2021 · 35:08


    The current monopoly over data production, collection, and information centralizes epistemic power and the capacity to accumulate economic capital through data. At the same time, this process dispossesses marginalized and racialized communities of the data they are producing. The result is a dynamic that mirrors the dispossession created through colonialism, in a new form of “techno-imperialism”. Current debates surrounding monopoly structures in technology tend to focus on the economic effects rather than the epistemic consequences. This talk will refocus the conversation and consider the pros and cons of the anti-trust policy solutions currently being considered in Canada.

    Ishtiaque Ahmed, Whose Intelligence? Whose Ethics? Ethical Pluralism and Postcolonial Computing

    Jun 7, 2021 · 56:09


    With the unprecedented advancement of Artificial Intelligence (AI) in the last decade, several ethical concerns about AI technologies have also emerged. Researchers today are concerned about bias, discrimination, surveillance, and privacy breaches in the use of AI technologies, to mention just a few. However, most of this discourse around “Ethics in AI” has become centered on Western societies, and the concerns are emerging from, and being shaped by, ethical values that are more common in the West than in other parts of the world. To this end, my research explores these ethical concerns of AI in the context of the Global South, especially the Indian Subcontinent. Based on my decade-long work in Bangladesh and India, I present in this talk how data-driven AI technologies are challenging local faith, familial values, customs, and traditions, and imposing scientific rationality through various postcolonial computing practices. I further explore how a novel kind of intelligence can be imagined by incorporating local values and community participation.

    Robert Soden, Responsible AI in Disaster Risk Management: A Community of Practice Perspective

    Jun 7, 2021 · 38:30


    The use of AI, and in particular machine learning, is increasingly being taken up as part of efforts to better understand and mitigate the potential impacts of disasters like earthquakes or floods. Experts and practitioners believe that these tools can help support societal efforts to inform decisions ranging from emergency preparedness to infrastructure retrofitting and the design of disaster insurance products. Despite widespread concerns over the role of AI tools in domains such as criminal justice, banking, and healthcare, little guidance is available for experts working on the tools in the area of disasters. This talk will report on an ongoing effort by organizations including the Red Cross, the World Bank, and several academic institutions to examine the potential for negative consequences of AI in the field of disaster management.

    Muriam Fancy, Governance of Ethical AI

    Jun 7, 2021 · 32:59


    AI is not without bias, and the risks it can pose are often unknown. However, this does not stop governments from procuring and deploying AI systems for the public. This talk will present case examples of how governments procure AI systems. The presentation will then turn to methodologies for ensuring that governments can deploy ethical and safe AI systems. The roles of the public, government, and private stakeholders are all different, yet all are necessary to reduce the risk caused when applying AI on a mass scale. The presentation will conclude by recommending policy solutions to avert the consequences of deploying risky AI systems.

    Anne-Marie Fowler, Differentiation Is Mechanics, Integration Is Art

    Jun 7, 2021 · 34:15


    A digital “mind” is not a human mind in lesser form; rather, it is entirely, and discretely, different. As such, it has been epitomized in terms of efficient prediction rather than origin and indeterminacy. However, both the human mind and the digital mind can be considered as sites of pure conception. Drawing principally from Hermann Cohen's logic of origin, and applying an originary lens to philosophical inputs ranging from mathematics, aesthetics, and biology, I will point to an alternative modal framing of AI ethics that is potentially generative rather than solely corrective.

    Catherine D'Ignazio and Lauren F. Klein, Data Feminism

    Jun 7, 2021 · 39:13


    As data are increasingly mobilized in the service of governments and corporations, their unequal conditions of production, their asymmetrical methods of application, and their unequal effects on both individuals and groups have become increasingly difficult to ignore for data scientists, and for others who rely on data in their work. But it is precisely this power that makes it worth asking: “Data science by whom? Data science for whom? Data science with whose interests in mind?” These are some of the questions that emerge from what we call data feminism, a way of thinking about data science and its communication that is informed by the past several decades of intersectional feminist activism and critical thought. Illustrating data feminism in action, this talk will show how challenges to the male/female binary can help to challenge other hierarchical (and empirically wrong) classification systems; it will explain how an understanding of emotion can expand our ideas about effective data visualization; how the concept of invisible labor can expose the significant human efforts required by our automated systems; and why the data never, ever “speak for themselves.” The goal of this talk, as with the project of data feminism, is to model how scholarship can be transformed into action: how feminist thinking can be operationalized in order to imagine more ethical and equitable data practices.

    Vinith Suriyakumar, Differentially Private Prediction in Health Care Settings

    Jun 7, 2021 · 28:42


    Machine learning has the potential to improve health care through its ability to extract information from data. Unfortunately, machine learning is susceptible to privacy attacks which leak information about the data it was trained on. This can have dire consequences in health care, where protecting patient privacy is of the utmost importance. Differential privacy has been proposed as the leading technique to defend against privacy attacks and has been used successfully by the US Census Bureau, Google, and Apple. This talk will present the challenges of using differentially private machine learning in health care and how future solutions might address them.
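
    As a concrete anchor, the basic building block of differential privacy is noise calibrated to a query's sensitivity. The sketch below shows the Laplace mechanism on a counting query over invented patient records; real health-care deployments would rely on vetted libraries (and, for model training, techniques such as DP-SGD) rather than this toy:

```python
# Minimal sketch of the Laplace mechanism for an epsilon-DP counting query.
import numpy as np

def dp_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query (sensitivity 1) with epsilon-DP by adding
    Laplace noise with scale 1/epsilon to the true count."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

patients = [{"age": 70, "diabetic": True},
            {"age": 54, "diabetic": False},
            {"age": 63, "diabetic": True}]

# Privately release how many patients are diabetic; smaller epsilon means
# stronger privacy and noisier answers.
print(dp_count(patients, lambda r: r["diabetic"], epsilon=0.5))
```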

    André Brock, Black Morpheus: Race in the Technocultural Matrix

    Oct 19, 2020 · 34:53


    André Brock, Black Morpheus: Race in the Technocultural Matrix by Ethics of AI Lab, University of Toronto

    Mohamed Abdalla, The Grey Hoodie Project

    Oct 19, 2020 · 50:08


    As governmental bodies rely on academics' expert advice to shape policy regarding Artificial Intelligence, it is important that these academics not have conflicts of interest that may cloud or bias their judgement. Our work explores how Big Tech is actively distorting the academic landscape to suit its needs. By comparing the well-studied actions of another industry, Big Tobacco, to the current actions of Big Tech, we see similar strategies employed by both industries to sway and influence academic and public discourse. We examine the funding of academic research as a tool used by Big Tech to put forward a socially responsible public image, influence events hosted by and decisions made by funded universities, influence the research questions and plans of individual scientists, and discover receptive academics who can be leveraged. We demonstrate, in a rigorous manner, how Big Tech can affect academia from the institutional level down to individual researchers. Thus, we believe that it is vital, particularly for universities and other institutions of higher learning, to discuss the appropriateness and the tradeoffs of accepting funding from Big Tech, and what limitations or conditions should be put in place (featured in WIRED, see below). Mohamed Abdalla Centre for Ethics & Department of Computer Science University of Toronto

    Additional Resources:
    Mohamed Abdalla & Moustafa Abdalla, The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity, https://arxiv.org/abs/2009.13676
    Will Knight, Many Top AI Researchers Get Financial Backing From Big Tech, WIRED, Oct 4, 2020, https://www.wired.com/story/top-ai-researchers-financial-backing-big-tech/

    Avery Slater, Kill Switch: The Ethics of the Halting Problem

    Aug 6, 2020 · 47:40


    Two centuries of dystopian thought consistently imagined how technologies “out of control” can threaten humanity: with obsolescence at best, with violent systemic destruction at worst. Yet current advances in neural networked machine learning herald the advent of a new ethical question for this established history of critique. If a genuinely conscious form of artificial intelligence arises, it will be wired from its inception as guided by certain incentives, one of which might eventually be its own self-preservation. How can the tradition of philosophical ethics approach this emerging form of intelligence? How might we anticipate the ethical crisis that emerges when machines we cannot turn off cross the existential threshold, becoming beings we should not turn off? Avery Slater University of Toronto Department of English

    Chelsea Barabas, Beyond Accuracy and Bias: The Pursuit of “Ethical AI” in Criminal Law

    Aug 6, 2020 · 53:59


    Data-driven decision-making regimes, in the form of predictive tools like crime hotspotting maps and risk assessment instruments, are rapidly proliferating across the criminal justice system as a means of addressing accusations of discriminatory and harmful practices by police and court officials. In recent times these data regimes have come under increased scrutiny, as critics point out the myriad ways that they can reproduce or even amplify pre-existing biases in the criminal justice system. These concerns have given rise to an influential community of researchers from both academia and industry who have formed a new regulatory science under the rubric of “fair, accountable, and transparent” algorithms, which seek to optimize accuracy and minimize bias in algorithmic decision making systems. In this talk, Barabas argues that the ethical, political, and epistemological stakes of criminal justice applications of AI cannot be understood simply as a question of bias and accuracy. Nor can we measure the impact of these tools if key outcome measures are left unexamined. She outlines a more fundamental, abolitionist approach for excavating the ways that predictive tools reflect and reinforce the punitive practices that drive disparate outcomes in the criminal justice system. Finally, she will illustrate a more transformational approach to re-imagining the way data might be used to challenge the penal ideology and de-naturalize carceral state practices. Chelsea Barabas MIT Media Lab

    Ida Koivisto, Thinking Inside the Box: Transparency in Automated Decision-Making

    Aug 6, 2020 · 42:27


    Ida Koivisto, Thinking Inside the Box: Transparency in Automated Decision-Making by Ethics of AI Lab, University of Toronto

    Regina Rini, Democracy and Social Media are Incompatible: Now What?

    Jun 20, 2020 · 56:26


    It takes time for the norms of democratic debate to adjust to new technologies – in some cases, too much time. In parts of Europe in the 1920s and 30s, change brought on by the new technology of radio outran democratic adaptation. Rini will argue that we are now at a similar inflection point with social media. Healthy democratic debate requires that we view fellow citizens as typically sincere and thoughtful when they express disagreement. Rini identifies several features of social media discourse that have rapidly undermined this presumption and weakened the authority of democratic norms. What can be done about these shifts? Rini argues that state and consumer solutions are unlikely to work. Our best hope is for social media platforms to create infrastructure enabling citizens to detect insincerity and carelessness in discourse. Regina Rini York University Philosophy

    Molly Sauter, Algorithmic Ethics and Personhood

    Apr 29, 2020 · 21:45


    As “big data”-based predictive algorithms and generative models become commonplace tools of advertising, design, user research, and even political polling, are these modes of constructing machine-readable models of individuals displacing humans from our world? Are we allowing the messy, unpredictable, illegible aspects of being human to be overwritten by demands we remain legible to AI and machine learning systems intended to predict our actions, model our behavior, and sell us something? In this talk, technology scholar Molly Sauter looks at how currently deployed modeling systems constitute an attack on personhood and self-determination, particularly in their use in politics and elections. Sauter posits that the use of “big data” in politics strips its targets of subjectivity, turning individuals into ready-to-read “data objects,” and making it easier for those in positions of power to justify aggressive manipulation and invasive inference. They further suggest that when big data methodology is used in the public sphere, it is reasonable for these “data objects” to, in turn, use tactics like obfuscation, up to the point of actively sabotaging the efficacy of the methodology in general, to resist attempts to be read, known, and manipulated. Molly Sauter Communication Studies McGill University

    Richard Zemel, Ensuring Fair and Responsible Automated Decisions

    Apr 23, 2020 · 53:57


    Information systems are becoming increasingly reliant on statistical inference and learning to render all sorts of decisions, including the issuing of bank loans, the targeting of advertising, and the provision of health care. This growing use of automated decision-making has sparked heated debate among philosophers, policy-makers, and lawyers, with critics voicing concerns with bias and discrimination. Bias against some specific groups may be ameliorated by attempting to make the automated decision-maker blind to some attributes, but this is difficult, as many attributes may be correlated with the particular one. The basic aim then is to make fair decisions, i.e., ones that are not unduly biased for or against specific subgroups in the population. This episode will discuss various computational formulations and approaches to this problem. Richard Zemel Computer Science & Vector Institute University of Toronto
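
    One common formalization of this fairness aim (not Zemel's specific method) is demographic parity: positive decision rates should be similar across subgroups. The sketch below, with invented loan data, computes the parity gap; its closing comment notes why mere “blindness” to the protected attribute does not achieve it:

```python
# Demographic parity gap on invented loan decisions.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = loan approved
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")  # 0.0 = parity

# Note: simply deleting `group` from the features ("fairness through
# blindness") does not close this gap when other features, such as postal
# code, remain correlated with the protected attribute.
```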

    John Basl & Jeff Behrends, Why Everyone Has It Wrong About the Ethics of Autonomous Vehicles

    Apr 15, 2020 · 56:33


    Many of those thinking about the ethics of autonomous vehicles believe there are important lessons to be learned by attending to so-called Trolley Cases, while a growing opposition is dismissive of their supposed significance. The optimists about the value of these cases think that because AVs might find themselves in circumstances that are similar to Trolley Cases, we can draw on them to ensure ethical driving behavior. The pessimists are convinced that these cases have nothing to teach us, either because they believe that the AV and trolley cases are in fact very dissimilar, or because they are distrustful of the use of thought experiments in ethics generally. Jeff Behrends Harvard University Philosophy John Basl Northeastern University Philosophy

    Frank Rudzicz, The Future of Automated Healthcare

    Apr 10, 2020 · 50:14


    As artificial intelligence and software tools for medical diagnosis are increasingly used within the healthcare system generally, it will be important that these tools are used ethically. This episode will cover recent advances in machine learning in healthcare, current approaches to ethics in healthcare, likely changes to regulation to allow for increased use of AI, and new challenges, both technical and societal, that will arise given those changes. Frank Rudzicz University Health Network & Computer Science University of Toronto

    Out of Their Cages and Into the City: Robots, Regulation, and the Changing Nature of Public Spaces

    Mar 30, 2020 · 45:15


    The laws that permit, regulate, or prohibit robotic systems in public spaces will in many ways determine how this new technology impacts the space and the people who inhabit that space. This raises the questions: how should regulators approach the task of regulating robots in public spaces? And should any special considerations apply to the regulation of robots because of the public nature of the spaces they occupy? Kristen Thomasen University of Windsor Law

    Sunit Das, AI in Medicine: Hopes? Nightmares?

    Mar 8, 2020 · 53:38


    Artificial intelligence promises to change the practice of medicine, from identifying early radiographic signs of stroke to determining the most appropriate second line chemotherapeutic agent for a patient with cancer. But many of the questions around AI involving transparency, judgment, and responsibility are at the very core of the compact that grounds the place of medicine and the identity of persons in our society. In this podcast, we will explore some of the promise offered by AI to the practice of medicine, while considering the profound ethical questions raised by that promise. Dr. Sunit Das Division of Neurosurgery University of Toronto

    Joe Halpern, Moral Responsibility, Blameworthiness, and Intention: In Search of Formal Definitions

    Feb 16, 2020 · 53:39


    The need for judging moral responsibility arises both in ethics and in law. In an era of autonomous AI agents, the issue has now become relevant to AI as well. Although hundreds of books and thousands of papers have been written on moral responsibility, blameworthiness, and intention, there is surprisingly little work on defining these notions formally. We will need formal definitions in order for AI agents to apply these notions. Joe Halpern Cornell University Computer Science Department
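
    To illustrate what a formal definition might look like, the sketch below loosely follows one proposal from the literature (Halpern and Kleiman-Weiner's degree of blameworthiness): an agent who performed an action is blameworthy for an outcome to the extent that some available alternative made the outcome less likely. The probabilities and action names are invented:

```python
# Hedged illustration of a "degree of blameworthiness" computation.
def blameworthiness(p_outcome: dict, action: str) -> float:
    """Drop in outcome probability relative to the best alternative action."""
    alternatives = [p for a, p in p_outcome.items() if a != action]
    if not alternatives:
        return 0.0
    return max(0.0, p_outcome[action] - min(alternatives))

# Pr(harm | action) for a hypothetical autonomous agent's options:
p_harm = {"proceed": 0.9, "brake": 0.1, "swerve": 0.4}
print(blameworthiness(p_harm, "proceed"))  # 0.8: braking made harm far less likely
print(blameworthiness(p_harm, "brake"))    # 0.0: no alternative did better
```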

    Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention

    Feb 9, 2020 · 39:31


    A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. Ifeoma Ajunwa Cornell University Labor Relations, Law, and History

    Tom Slee, Private Sector AI: Ethics and Incentives

    Feb 2, 2020 · 33:52


    Algorithms that sort people into categories are plagued by incompatible incentives. While more accurate algorithms may address problems of statistical bias and unfairness, they cannot solve the ethical challenges that arise from incompatible incentives. Algorithm owners are drawn into taking on the tasks of governance, managing and validating the behaviour of those who interact with their systems. The governance role offers temptations to indulge in regulatory arbitrage. If governance is left to algorithm owners, it may lead to arbitrary and restrictive controls on individual behaviour. The goal of algorithmic governance by automated decision systems, social media recommender systems, and rating systems is a mirage, retreating into the distance whenever we seem to approach it. Tom Slee SAP
    Recorded at Toward a Handbook of Ethics of AI, Centre for Ethics, University of Toronto, March 1-2, 2019

    Brian Cantwell Smith, Reckoning and Judgment

    Jan 22, 2020 · 55:53


    Brian Cantwell Smith, author of The Promise of Artificial Intelligence (MIT Press, 2019), reflects on the limitations of AI ethics: all we are likely to build, based on anything we currently understand, are systems that reckon. Ethical deliberation, as opposed to ethical consequence, requires full-scale judgment, which goes quite a bit beyond such reckoning powers. Brian Cantwell Smith iSchool, Computer Science, Philosophy University of Toronto

    Hector Levesque, Rethinking the Place of Thinking in Intelligent Behaviour

    Jan 20, 2020 · 37:58


    It seems clear that in people, ordinary commonsense thinking is an essential part of acting intelligently. Yet the most popular current approach to Artificial Intelligence downplays this thinking aspect and emphasizes learning from massive amounts of data instead. This episode goes over these notions and attempts to make the case that computer systems based on even extensive learning alone might have serious dangers that are not immediately obvious. Hector Levesque Computer Science University of Toronto

    Zack Lipton, Fairness, Interpretability and the Dangers of Solutionism (Ethics of AI in Context)

    Jan 16, 2020 · 65:56


    While the deep questions concerning the ethics of AI necessarily address the processes that generate our data and the impacts that automated decisions will have, neither ML tools nor proposed ML-based mitigation strategies tackle these problems head on. This talk explores the consequences and limitations of employing ML-based technology in the real world, the limitations of recent solutions for mitigating societal harms, and contemplates the meta-question: when should (today's) ML systems be off the table altogether?

    Ruben Gaetani, Sidewalk Toronto: Ethics in the "Smart City" (2018)

    Jan 14, 2020 · 8:55


    Ruben Gaetani's contribution to an early panel discussion of the Sidewalk Toronto Project, a collaboration of Google's Sidewalk Labs and Waterfront Toronto. Ethics in the City, Jan 28, 2018. Ruben Gaetani University of Toronto Management
