Can social groups have beliefs, plans, or desires in their own right? Could they even have minds, emotions, or conscious experiences? How would we tell, and how do our experiences as participants in collective processes relate to what we should say about collectives themselves?
This topic directly connects to my work on mental combination: in a sense, I argue that each of us is a collective mind already. More specifically, chapters 6 and 7 of Combining Minds defend the possibility of literal group consciousness: if brains can be conscious, so can suitably interacting sets of brains. Whether any current social groups are actually conscious, however, depends on which theory of consciousness is correct: if panpsychism is true then they probably are conscious, while on other theories it’s much less clear. Either way, it is worth examining the analogies and disanalogies between mind-like dynamics on different scales.
My other work on this topic falls into two broad clusters: one that analyses the specific forms of collective intentionality that currently structure our societies, and one that explores how future technology might enable forms of social coordination that blur the boundary between individual minds and collective minds. The latter connects closely with my work on AI ethics.
In the first cluster:
“Implicit Coordination: Acting Quasi-Jointly on Implicit Shared Intentions.” (co-authored with Judith Martens, 2018, Journal of Social Ontology 4 (2): 93–120; presented in talks at the University of Vienna and University College Cork; received the 2018 Social Ontology Essay Prize)
A paper about how social groups often exercise a sort of quasi-agency, in which members' actions are somewhat causally interdependent, and aim at somewhat overlapping goals, but are not explicitly coordinated: a sort of unconscious-but-still-intentional working together.
“Joint Mentality and Quasi-agential Groups” (in progress; presented in video form at the 2016 Minds Online Conference, BrainsBlog)
A paper about applying intentional explanations - X did A because they wanted B, and believed that A would lead to B - to decentralised collections of people.
"Social Construction as Social Agency" (in progress)
A paper asking, essentially, who it is that does the 'constructing' when something is socially constructed - in particular, who is responsible for constructing oppressive categories and systems? I argue that it is misleading both to attribute agency too narrowly, to one specific identifiable demographic or institution, and to attribute it too widely, to an undifferentiated 'us'. Rather, a cluster of overlapping collective agents exert different degrees of agency and bear different degrees of responsibility. (This links up with my work on gender.)
"We-First: Knowing and Perceiving Interactions from Within." (in progress)
A paper about the phenomenology of social interaction, and the possibility that we might have an awareness of the interaction as such (partly introspective and partly perceptual) that’s prior to distinct awareness of either our own contribution or the other’s. (This links up with my work on empathy.)
In the second cluster:
“Rational Agency without Self-Knowledge: Could ‘We’ Replace ‘I’?” (2017, Dialectica 71 (1): 3–33; presented in talks at the Society for Exact Philosophy and South Carolina Philosophical Association)
A paper about the common assumption that being a rational, reflective agent requires being self-aware, knowing the individual you are. I argue that there is at least one alternative: knowing the connected group you belong to. However, this alternative would require extremely intimate relations among all members of this group, likely impossible without the use of neurotechnology or AI.
“When Does Thinking Become Communication? Humans, AI, and Porous Minds” (in progress, for Communication with AI: Philosophical Perspectives, eds. R. Sterken and H. Cappelen)
A paper arguing, first, that there's only a difference of degree between a single agent engaged in internal reflection and a group of many agents communicating, and, second, that future technology is likely to create systems whose informational processes are halfway between communication and internal reflection.
I also have two papers in progress on the possibility that colonies of eusocial insects - ants, bees, wasps, and termites - might be conscious wholes, in addition to their individual members being conscious. (This links up with my work on the extent of consciousness in nature.)
"Anty-Nesting: Why Eusocial Insect Colonies are Probably Conscious." (In progress)
An argument that eusocial insect colonies are probably conscious, given plausible conceptual claims and some popular criteria for consciousness.
"Combinationism and Eusocial Insects: What Is It Like to Be a Hivemind?" (In progress, for Are Ant Colonies Conscious?, ed. A. Fonseca)
A discussion of what a colony's consciousness might be like: its variable timescales, borderline unity, embodiment, affective character, presence or absence of sensory images.
Some of these ideas are laid out in a post at 'The Daily Ant', here. Some are also sketched out in a talk I gave at CUNY; slides here.