This talk will explore three case-studies of moral psychology: (1) Physical contact, such as helping, hindering, and hitting; (2) Fair and unfair distribution of resources; and (3) Violations of purity, with special focus on sexual behavior. I will review some ongoing experimental work with babies and young children that bears on the emergence of moral intuitions and motivations in these domains, and I will argue that these domains show strikingly different patterns of development. I end with an argument that developmental moral psychology is relevant to problems of normative ethics, though in a rather indirect way.
William Casebeer, Program Manager, Defense Advanced Research Projects Agency
Fabrice Jotterand, Assistant Professor, Clinical Sciences & Psychiatry, Southwestern Medical Center, University of Texas
Organized by the NYU Center for Bioethics in collaboration with the Duke Kenan Institute for Ethics with generous support from the NYU Graduate School for Arts & Science and the NYU Humanities Initiative. It has been a decade since the first brain imaging studies of moral judgments by Joshua Greene, Jorge Moll and their colleagues were reported. During this time, there have been rich philosophical and scientific discussions regarding a) whether brain imaging data can tell us anything about moral judgments, and b) what they do tell us if they can tell us something about moral judgments. In this workshop, we aim to bring leading philosophers, neuroscientists, and psychologists in this area together to examine these issues and to explore the future directions of this research.
This talk will concentrate on two brain areas critical for the development of care-based morality (social rules covering harm to others). The role of the amygdala in stimulus-reinforcement learning will be considered, particularly when the reinforcement is social (the fear, sadness and pain of others). The role of orbital frontal cortex in the representation of value, critical for decision making (both care-based moral and non-moral) will also be considered. Data showing dysfunction in both of these systems and these functions and their interaction in individuals with psychopathic traits will be presented and the implications of these data for care-based (and other forms of) morality will be considered.
Neuroscientists are now discovering how hormones and brain chemicals shape social behavior, opening potential avenues for pharmacological manipulation of ethical values. In this talk, I will present an overview of recent studies showing how altering brain chemistry can change moral judgment and behavior. These findings raise new questions about the anatomy of the moral mind, and suggest directions for future research in both neurobiology and practical ethics.
Should the research on moral psychology be interpreted as suggesting new approaches for improving, or perhaps enhancing, moral intuitions, attitudes, judgments, and behavior, or for reforming social institutions? Can we create more effective educational tools for improving moral development? For the last century psychiatry has attempted to medicalize moral failings: lack of self-control, addiction, anger, impatience, fear. But what of engineering ourselves to higher states of virtue? If the enhancement of morality is possible, which virtues or cognitive capabilities will it be safe to enhance, and how? What might be the unanticipated side effects of attempts to enhance moral behavior?
Does the "is" of empirical moral psychology have implications for the "ought" of normative ethics? I'll argue that it does. One cannot deduce moral truths from scientific truths, but cognitive science, including cognitive neuroscience, may nevertheless influence moral thinking in profound ways. First, I'll review evidence for the dual-process theory of moral judgment, according to which characteristically deontological judgments tend to be driven by automatic emotional responses while characteristically consequentialist judgments tend to be driven by controlled cognitive processes. I'll then consider the respective functions of automatic and controlled processes. Automatic processes are like the point-and-shoot settings on a camera, efficient but inflexible. Controlled processes are like a camera's manual mode, inefficient but flexible. Putting these theses together, I'll argue that deontological philosophy is essentially a rationalization of automatic responses that are too inflexible to handle our peculiarly modern moral problems. I'll recommend consequentialist thinking as a better alternative for modern moral problem-solving.
In Unfit for the Future: The Need for Moral Enhancement (OUP, forthcoming 2012), Julian Savulescu and I argue that in order to solve the greatest moral problems of the present time, like anthropogenic climate change, environmental deterioration, and global inequality, it is necessary to morally enhance human beings, not only by traditional means but also, if possible, by biomedical means. Some, like John Harris, have replied that moral enhancement by biomedical means would undercut our freedom and, so, would not really increase our moral value. I believe that this objection is mistaken: these means would undercut neither our freedom nor our rationality. However, what I shall mainly discuss in my presentation is a reply which grants that this is so, that genuine moral enhancement could be produced by biomedical means. What I shall discuss is Nicholas Agar's argument in Humanity's End (MIT, 2010) to the effect that it is morally permissible for human beings to prevent the creation of morally enhanced people because this could harm the interests of the unenhanced. I argue that this argument fails because it overlooks the distinction between morally permissible and impermissible harm. The harm that the enhanced would cause the unenhanced would be permissible harm, and it is not permissible to prevent such harm.
The invention of a drug that enhanced moral judgment would raise a number of legal issues. What should the impact be of taking the drug, or refusing or being unable to take it, on criminal and civil liability? Could taking it be a condition of parole? How safe would such a drug have to be to be approved by the FDA, and how would the agency weigh risks and benefits? Under current law, to what extent could the government require people to take it? Could parents be required to give it to their children? If not, should the law be changed? Would the drug be covered under third party health insurance programs, including the Obama health reform plan, and if not, should it be? Are there certain types of persons who should not take it, such as soldiers, in whom it might interfere with the duty to follow lawful orders? How should the availability of the drug affect international law?
Many psychologists and philosophers are attracted to the idea that intuitions are heuristics, a kind of mental short-cut or rule of thumb. As a result, many think that the issue of whether intuitions are reliable is just the issue of whether heuristics are reliable. In this paper, I argue that there are reasons to doubt that intuitions are heuristics. I consider the implications of this point for the debate concerning the reliability of intuitions and for those who hold some kind of dual-process model of moral judgment.
Mental state reasoning is critical for moral cognition, allowing us to distinguish, for example, murder from manslaughter. I will present neural evidence for distinct cognitive components of mental state reasoning for moral judgment, and investigate differences in mental state reasoning for distinct moral domains, i.e. harm versus purity, for self versus other, and for groups versus individuals. I will discuss these findings in the context of the broader question of why the mind matters for morality.
In some cases, moral behavior seems to be fully commendable (only) when the subject performs it wholeheartedly, without conflict between or among counter- or pro-moral beliefs and counter- or pro-moral aliefs. But in others (perhaps those where moral demands at different levels pull in different directions), moral behavior seems to be fully commendable (only) when the subject experiences a conflict between pro-moral beliefs and pro-moral aliefs, where the latter -- generally pro-social -- response is morally overridden in this (exceptional) circumstance. In still others, moral behavior seems to be fully commendable when it occurs as a result of the agent's overcoming certain counter-moral aliefs or beliefs. What sorts of systematic patterns do these cases exhibit, and how do they connect to Tetlock's work on tragic and taboo tradeoffs, Williams' work on "one thought too many" and "residues", Kant and Arpaly on enkratia and (reverse) akrasia, and recent work in neuroscience?
Different kinds of moral dilemmas produce activity in different parts of the brain, so there is no single neural network behind all moral judgments. This paper will survey the growing evidence for this claim and its implications for philosophy and for method in moral neuroscience.
Medical research ethics, with its focus on preventing participants from being exploited or exposed to undue risk, has almost entirely neglected researchers' ancillary-care obligations: their obligations to provide participants with medical care that they need but that is not required in order to carry out the researchers' scientific plans safely. To begin to develop a theoretical account of researchers' ancillary-care obligations, this paper develops and explores the more general idea of moral entanglements. Of interest in its own right, this category comprises the ways in which, through innocent transactions with others, we can unintendedly accrue special obligations to them. More particularly, the paper explains intimacy-based moral entanglements, to which we become liable by accepting another's waiver of privacy rights. Sometimes, having entered into another's private affairs for innocent or even helpful reasons, one discovers needs of theirs that then become the focus of special duties of care. The general duty to warn them of their need cannot directly account for the full extent of these duties, but it does indicate why a silent retreat is impermissible. The special duties of care importantly rest on a transfer of responsibilities that accompanies the privacy waivers. The result is a special obligation of beneficence that, like researchers' ancillary-care obligations, is grounded in a voluntary transaction despite not having been voluntarily undertaken.
Illicit drug use is thought to pose a public health problem for several possible reasons. Here I discuss one such reason: the supposed causal relation between illicit drug use (especially use of so-called "hard" illicit drugs) and violent crime. The hypothesis that drugs are causally connected to crime is also a favorite basis on which to argue that drug use should be criminalized. This hypothesis is difficult to test empirically. I build on some recent criminological findings of Frank Zimring from New York City to suggest that many theorists have tended to overstate the drugs-crime connection. In New York City, violent crime has decreased greatly while "hard" illicit drug use has remained constant. As Zimring concludes, it is possible to make enormous progress in the war on crime without making any headway in the war on drugs. I examine the implications of these recent findings for debates about drug criminalization.