
Ethics of AI


How should we fit artificial intelligences into our moral frameworks? Will they be equal persons, or mere tools, or minds like those of animals, or something else? Will they be discrete individuals, or a single internet-based global collective, or something in between - or will they shift between different modes of existence so easily that we have to rethink the very category of an individual?

I’m interested in this topic partly because I’m interested in the moral value of consciousness, and partly because I’m interested in how we can empathise with AIs, or allow them to empathise with us. But most fundamentally, I think that understanding mental combination is going to be even more important for thinking about AI than it is for humans, because the boundaries between AI minds may be much weaker than those between human minds. The framework of 'functionalist combinationism' outlined in Chapter 5 of Combining Minds is intended to structure how we think about individually intelligent parts of intelligent wholes, including artificial ones.

My work on this topic is currently centred on my collaboration with Jeff Sebo on the ethics of connected minds. As part of NYU’s new Mind, Ethics, and Policy program, we are working on three papers on the ethical import of mental combination:

  • Overlapping Minds and the Hedonic Calculus

  • Autonomy, Self-Preservation, and Networked Minds

  • Mereological Virtue: Being Good by Having Good Parts?

This project is also sponsoring events on the philosophy of AI, starting with this talk by David Chalmers.

I’m particularly interested in the ways that AI and neurotechnology might blur the boundaries between the individual and the social, a topic that connects with my work on collective intentionality.

  • Chapter 8 of Combining Minds discusses this question as an extended thought-experiment about what it might be like for two minds to become (parts of) a single mind, through brain-linking technology.

A paper about the common assumption that being a rational, reflective agent requires being self-aware: knowing which individual you are. I argue that there is at least one alternative: knowing which connected group you belong to. However, this alternative would require extremely intimate relations among all members of the group, likely impossible without the use of neurotechnology or AI.

  • “When Does Thinking Become Communication? Humans, AI, and Porous Minds” (in progress, for Communication with AI: Philosophical Perspectives, eds. R. Sterken and H. Cappelen)

A paper arguing, first, that there's only a difference of degree between a single agent engaged in internal reflection and a group of many agents communicating, and, second, that future technology is likely to create systems whose informational processes are halfway between communication and internal reflection.
