Mind, Matter, and Metabolism
CUNY Graduate Center &
University of Sydney
To appear in the Journal of Philosophy.
This paper is about the relevance to philosophy of mind of some biological topics – the nature of life in general, the evolution of animal life in particular. I'll look especially at a cluster of questions about qualia, consciousness, and the "explanatory gap." My overall goal is to develop a picture in which the basis of living activity in physical processes makes sense, the basis of proto-cognitive and then cognitive processes in living activity makes sense, and the basis of subjective experience in metabolically situated cognitive processes also makes sense. I think that working through all the links will make a difference; things won't look the way they do when we just ask: how can consciousness exist in a physical system?
Some steps within this project are taken here. The early sections discuss, drawing on recent biology and biophysics, how living activity relates to its physical basis, and then the relation between living activity and cognitive capacities in simple organisms. I try in these sections to change our picture of the material basis of the mind. Thinking about life in general only goes so far, though. The next part of the paper is about the kind of life seen in animals, especially the role of nervous systems and stages in animal evolution that have particular relevance to the evolution of subjective experience. I then turn to the possibility of mental states existing in non-living systems, especially computers. I argue against commonly held views about the "multiple realizability" of mental states of the kind seen in humans. Debates about multiple realizability are often understood in terms of a question about the importance of "hardware" features of a system as opposed to its functional organization. However, a rejection of familiar doctrines of multiple realizability can be developed within functionalism.
I. Changing views of life and mind
It was once common to think of life as a sort of bridge between the mental and physical. Aristotle's view of the different kinds of soul is a position of this kind. Descartes, in contrast, asserted a mechanistic view of life and isolated the mental/physical relation as the fundamental problem. Later writers within materialist and "emergentist" programs revived claims of continuity between life and mind – these include Spencer, Lewes, Dewey, and Broad. Most of the classic works developing forms of the "identity theory" in the 1950s and 1960s tended away from an engagement with this side of biology, though Herbert Feigl's paper "The 'Mental' and the 'Physical'" (1958) is an important exception which defends a view with similarities to mine. The "second generation" of identity theories, those of Armstrong and Lewis, can be seen as moving towards a denial of the importance of living activity per se.
The landscape was affected in a more profound manner by the rise of artificial intelligence (AI). This work seemed to show that some aspects of cognition are mechanizable in principle, and mechanizable in a non-living system. There seems no question of life being present in a classical AI system, whether attached to a robot or not, and given that there seems a real possibility that such a system might realize all of mentality, there can't apparently be too close a link between life and mind. Computation, rather than life, became the bridging concept between mental and physical.
Another development pushing the same way was a change in the understanding of life itself. There has been a partial deflation of the concept of life, especially when we compare its roles at earlier stages in the history of biology. The following interpretation is by no means universally accepted, but I think the situation looks something like this: we have theories of the different things that living systems do – they maintain their organization, using energy and other raw materials, at least many of them perceive and behave, and they develop, reproduce, and evolve. Our understanding of those activities is a "theory of life" of a sort that removes any appearance of a large-scale problem that might motivate vitalism. But there is no need to say much about which of these activities, or which combination of them, comprises life. As a result, biology textbooks feel able to say a few general things in an early chapter about what living systems characteristically do, and perhaps why these activities tend to cluster, without taking much of a stand on the nature of life. The textbook will go on to say more definite things about each of these activities; the books are not relaxed about all biological concepts, but they are about life. This development makes it seem even less appealing to use life as a load-bearing concept in treatments of the mind-body problem. As life has become less mysterious, it has become less important as a tool.
All these developments (around computers, and life) are reasonable, but I think that some of what has resulted is a wrong turn. I'll argue here for the relevance of some of the biological features associated with life to the mind-body problem, especially the explanation of subjective experience, the aspect of the mind-body relation where many believe there is an "explanatory gap." My aim is not to put the issue entirely to rest in this paper, but to narrow the gap and change the framing of the discussion.
Above I introduced the problem of subjective experience. This is often now identified with the problem of consciousness. Some earlier discussions distinguished three main problems in philosophy of mind: qualia, consciousness, and intentionality. The problem of qualia was seen as the problem of explaining the first-person feel of the mental, in its broadest sense, while consciousness was seen as a sophisticated kind of cognition with a special qualitative side. More recently the problems of "qualia" and "consciousness" are often grouped together as one problem, or handled with a distinction between different kinds of consciousness. If there is something it feels like to be a system, then the system is said to have a kind of consciousness (perhaps "phenomenal consciousness").
The earlier set-up had advantages. The notion of qualia, despite its awkward name, accommodated the possibility that there might be some sort of very diffuse feeling present in a system, a minimal form of experience different from anything that would usually be called "consciousness." To some extent this is a verbal issue and to some extent it is not. If a person is skeptical about the idea of feeling, in the broad sense above, existing in an organism that does not have a complex neural organization similar to our own, then the newer framework may seem natural. If they think, instead, that simple forms of feeling probably exist in organisms with nervous systems very different from ours, and these forms of feeling are the "ground floor" for an understanding of subjective experience, then a framework that distinguishes the problems of qualia and consciousness will make more sense. Each terminology can capture all the possibilities, but each is more naturally suited to a particular view. Pain is perhaps the best case for motivating a divide between a broad sense of subjective experience and consciousness. I wonder whether squid feel pain, whether damage feels like anything to them, but I don't see this as wondering whether squid are conscious. Consider also the states Derek Denton calls the "primordial emotions" – bodily feelings which register important metabolic conditions such as thirst, the need for salt, or the feeling of not having enough air. Like Denton, I view these as candidates for being the most basic states that can feel like something to an organism. Accordingly, I'll use the phrase "subjective experience" for the broadest category of phenomena here, also describable by saying that some states of some systems feel like something to the system itself, and others do not. The problem is explaining the physical basis of subjective experience in this sense.
II. Matter at the scale of metabolism
I said in the previous section that life has become something like a cluster-concept. This cluster has two main parts, or poles. Life has a metabolic side, and a side that has to do with reproduction and evolution. Living systems maintain their organization in the face of thermodynamic tendencies towards disorder and decay, by taking in raw materials and using sources of energy to control chemical reactions. They also reproduce and evolve. There's no point in asking whether a system has to do both these things to be alive, whether either is sufficient, or one is primary. That's a misguided question. The theoretical connections, instead, are something like this. Metabolisms are shaped through evolutionary processes. This involves a role for reproduction as opposed to mere persistence; a metabolic system that can multiply its instances can evolve in ways that a non-reproducing system cannot, as the proliferation of any improvement creates many independent platforms on which further innovation can occur. Among metabolizing systems, then, those that can reproduce will become more complex and orderly as well as more common. Reproduction requires control of energy somewhere in the system, though not always direct control by the reproducer itself. Given this picture, there are tight evolutionary connections between metabolism and reproduction, but no impediment to seeing them as different things.
The metabolic side of life, in a broad sense of that term, is the side that is important in this paper. (So when I talk of "organisms" below, viruses and their non-metabolizing relatives are not included.) Let us now look at what metabolism is like, especially at its physical basis. Metabolic processes in cells occur at a specific spatial scale, the scale measured in nanometers – millionths of a millimeter. They also take place in a particular context, immersed in water. In that context and at that scale, matter behaves differently from how it behaves elsewhere. In a phrase due to Peter Hoffmann, what we find is a molecular storm. There is unending spontaneous motion which does not need to be powered by anything external. Larger molecules rearrange themselves spontaneously and vibrate, and everything is bombarded by water molecules, with any larger molecule being hit by a water molecule trillions of times per second. Electrical charge also plays a ubiquitous role, through ions dissolved in the water and charged regions of larger molecules. The parts of a cell that do things in the usual sense – making proteins, for example – are subject to forces much stronger than the forces they can exert. The way things get done is by biasing tendencies in the storm, nudging random walks in useful directions, thereby getting a consistent upshot out of vast numbers of mostly meaningless changes. The metabolisms of even the simplest known cells are also very complex, with many hundreds of chemicals involved. Some are more complex than others, but there are no known simple metabolisms.
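The way a consistent upshot can emerge from biased randomness can be illustrated with a minimal simulation. Everything here – the one-dimensional walk, the bias value, the step count – is an illustrative invention, not a biophysical measurement:

```python
import random

def biased_walk(steps, bias=0.51, seed=None):
    """1-D random walk: each step is +1 with probability `bias`, else -1.
    Even a tiny per-step bias yields a reliable net drift over many steps."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < bias else -1
    return position

# Locally each step is nearly a coin flip, but over a million steps a
# 51% bias produces a net drift on the order of 20,000 steps.
drift = biased_walk(1_000_000, bias=0.51, seed=0)
```

The analogy to the cell is only loose, but the moral carries over: useful work can be extracted not by overpowering the storm but by slightly biasing which of its fluctuations get kept.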
Some, though not all, commentators hold that it is inaccurate to think of the arrangements within cells as machines. The chemist Peter Moore, whose article "How to Think About the Ribosome" (2012) is one of my sources in this section, titles a part of his paper: "Macromolecular Devices Are Not Machines." Moore thinks that a machine is one kind of organized physical object, in which low-level interactions are predictable and parts are tightly coupled. A storm-like collection of random walks influenced by friction, charge, and thermal effects, in contrast, is non-mechanistic. The biophysicist Peter Hoffmann, in contrast, embraces talk of machines in his account of the activities within cells: the nanoscale is the scale at which "machines run themselves." Hoffmann and Moore do not differ, as far as I can see, in their views of what is going on within cells; they differ about the actual or useful boundaries of terms like "machine" and "mechanistic," with Hoffmann understanding these terms in a broader way than Moore.
Metabolism happens to operate at this special spatial scale, but does it need to be that way? Metabolisms are now very complex, I said, but they, too, surely don't need to be? Surely they once were not complex, as they evolved from a simple initial form?
Beginning with the last of these questions, the assumption that life must start from "simple beginnings" (in Charles Darwin's phrase) is often accepted, but recent work suggests that this picture is erroneous. My arguments on this point are based on an interpretation of work surrounded by a great deal of uncertainty, but a picture now emerging may be as follows. It's probably not true that present-day metabolisms evolved from very simple ones. There probably never were any metabolisms that were simple in the way that older models of the origins of life are simple. Those older models assumed that a few crucial reactions made a first form of life possible. The newer picture holds that the transition at the origin of life went not from simple to complex, but from disorderly to orderly. Disorderly and complex chemical systems gave rise to more orderly complex ones, featuring regular cycles. A reason to believe this comes from the inevitability of side-reactions in chemical systems. Simple metabolic models use imaginary chemistries in which each part has only one or two effects. In real chemistry, the parts have many effects of different sizes. The evolution of life was a matter of channeling and taming this sea of interactions, not taking a few simple interactions and stringing them together. Once there are basic metabolisms, they may become more complicated. But, again, the simplest ones are themselves complicated, and there's a good chance things have always been that way.
A second question raised above was as follows: Metabolism happens to operate at the nanoscale in a molecular storm, but does it need to? How contingent are the special features of material interaction in living systems? I won't address this by asking a series of "logically possible? nomically possible?" questions. Logical possibility is too weak a constraint in this context. Nomic possibility may be an important issue, but one that is difficult to address directly, and I will approach it by asking in a more informal way how hard it would be for things to be different, given the nature of matter and how matter comes to be laid out on planets.
The physical features of metabolism discussed above may be very far from accidental. Hoffmann argues that the scale and chemical context seen in actual present-day metabolisms provide the only setting in which we will find devices of the relevant kind that can work "completely autonomously." At this level there is spontaneous motion, and the relations between different forms of energy (chemical, kinetic, electrostatic) are such that a lot can happen through the transformation of one form of energy into another. At smaller or larger scales, these complex, partly orderly, and spontaneous processes do not occur. It would at least be very difficult, then, for life to arise outside this scale and context. Life could not have arisen in a dry-land macroscopic realm, on the scale of familiar machines. Perhaps once life exists in a "chemically easy" form, artifactual systems can be made that have different relations to energy and self-maintenance – I'll return to issues of this kind later. The message I am trying to emphasize, though, is that things are more constrained in this area than quick acts of imagining would suggest.
Some issues around the "mind-body problem" are about how any sort of mind could be physically realized. Others are more concerned with our minds, and their actual physical basis. This paper is intended to motivate claims of both kinds. First I will make a point about our own case, human minds, using the material on the table so far. This is a critical point about arguments against materialism based on conceivability, and the apparent separability of the mental and physical, as seen in writers such as Nagel, Kripke, and Chalmers. One kind of argument begins with the fact that it seems that we can conceive of an exact physical duplicate of an ordinary human, where the duplicate does not have any subjective experience. It's said that this exercise shows the separability of mental and physical, and hence the failure of materialism.
The ideas in this paper are not needed to reject those arguments. An adequate general reply is that although there is indeed an imaginative act we can engage in that shows this apparent separability, it can be diagnosed as arising from idiosyncrasies of the imagination, especially from the separability of what Nagel (op. cit.) has called "sympathetic" and "perceptual" imagining. Sympathetic imagining is imagining being something; perceptual imagining is imagining seeing it. If materialism were true, it would still seem false, because of the way our imaginations work: we can imaginatively detach a sympathetically imagined feeling from any perceptually imagined material basis. This reply does not make use of ideas about biology like those above. But those ideas do add something. In us, the material basis for mental activity is tied to cells and metabolism. When we look at what's actually going on in our bodies and brains, we find that many of the features of the physical that strike our imagination as un-mental are simply not present. And it is difficult to imagine the crucial processes at all, hard to get any sort of intuitive handle on what they are capable of. Arguments against materialism based on conceivability rely on the trustworthiness of intuitions about what the particular physical processes inside us can produce. Once we see what those physical processes are actually like, the trustworthiness of the crucial intuitions is much reduced.
This point can be brought into contact also with an earlier argument against materialism, Leibniz's "mill" argument. Leibniz says it is impossible in principle to give an explanation for "perception" and other mental states in terms of mechanical processes. We can see this by considering a thought-experiment:
[W]e must confess that perception, and what depends upon it, is inexplicable in terms of mechanical reasons, that is through shapes, size, and motions. If we imagine a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters a mill. Assuming that, when inspecting its interior, we will find only parts that push one another, and we will never find anything to explain a perception. And so, one should seek perception in the simple substance and not in the composite or in the machine.
Leibniz's mill was a macro-scale object, and the causal relations he describes are characteristic of that scale. An aqueous nano-mill would be a very different place. If we were observers of a living system at an intracellular scale, we would see some "parts that push one another," but not in the manner of macroscopic machines, and we would not only see pushes. We would see a storm of activity biased by charge and shape, generating partially random walks that, on average, tend in orderly directions. The processes are more causally holistic, noisier – more a matter of "herding molecular cats" – than a push-pull model allows.
Explaining how the whole process amounts to human "perception," as Leibniz asked, still requires working at a different level of description from the intracellular, but what seemed to be an obvious antipathy between mental and physical is much reduced. Our immediate intuitive response to the scene would surely tend towards panpsychism, if anything. That would be another over-reaching; we would not be learning something about matter in general from seeing how a living system works. And regardless of where our intuitions might be led, macroscopic machines provide a poor model for the material basis of living activity, and for the material basis of mental activity in living beings like us.
III. Life and cognition
I now turn to another side of the "bridging" role that biology may play in this area. This involves the link between living activity (in the metabolic sense) and the mind.
Starting with some obvious facts: all the systems we know of that are clear and uncontested cases of systems with minds are also living systems. The same is true of nearly all the usual contested candidates for having a mind – simple animals. The exceptions to this generalization about contested cases are sophisticated AI systems. A converse principle is also true: all known (metabolically) living systems engage in some cognitive or proto-cognitive processes. The term "proto-cognitive" will be discussed further below, but the activities I see as proto-cognitive include (at least) sensing events and responding to them in a way that helps keep the system alive.
Before discussing generalizations about life and cognition further, I'll look at proto-cognitive capacities in bacteria. Bacteria (and archaea, which are superficially similar but distant in evolutionary terms) are the simplest known organisms with metabolisms. They do a considerable amount of sensing and responding to events around them. I'll divide what they do into two main categories, one that involves gene regulation and another that contains everything else.
First, many of the control processes seen in bacteria work through the genome, by the regulation of gene expression. The output of these systems is chemical, rather than "behavioral" in the usual sense. Genetic systems in all organisms work through processes with a quite strongly computational character, featuring cascades of interactions that can be described in terms of ands, ors and nots. This may look like an immediate help to my case, but that is not so straightforward. A person might say that computation is the crucial concept here, and computation is seen both inside and outside living systems. Computation is important in gene action and important for thinking, but those are separate matters and computation does not have any essential connection to life in general. That would be a reasonable response to what's on the table so far, and I think it shows that describing the biological role of computation, in an ordinary sense of that term, is not enough. But what we see in basic kinds of metabolic life is something more specific than computation in that sense, and also something more specific than mere sensitivity to external stimuli. It is the use of sensing and responding, often coordinated with boolean or boole-approximating operations, to maintain the integrity of a system and its activity, seeking and maintaining some states while avoiding others. A collection of ands and if-thens with no metabolic point to them would be a different sort of thing. When the genome is used to adaptively control the synthesis of metabolically important chemicals by tracking conditions in the external environment, that is proto-cognitive in the sense I have in mind.
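To make the talk of ands, ors, and nots concrete, consider the textbook case of the lac operon in E. coli: to a first approximation, the genes for digesting lactose are expressed when lactose is present and the preferred fuel, glucose, is not. A schematic sketch – the function name is my own label, not biological nomenclature, and the real regulatory circuit is graded rather than cleanly boolean:

```python
def lac_operon_expressed(lactose_present: bool, glucose_present: bool) -> bool:
    """Boolean caricature of lac operon regulation: lactose-digestion
    genes are switched on only when lactose is available AND glucose,
    the preferred fuel, is absent."""
    return lactose_present and not glucose_present
```

The point of the example is not the logic gate itself but its metabolic role: the conjunction is in the service of fuel management, sensing conditions in order to keep the system running, which is what makes it proto-cognitive in the sense intended here.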
The second category consists of proto-cognitive control by means of devices that are not immediately dependent on the genome for their operation (though they do depend on it for their construction). A good example is chemotaxis (movement towards or away from chemicals) in the bacterium E. coli. This system makes use of memory; swimming choices at each time-step are controlled by a comparison made between the levels of good or bad chemicals that are presently sensed and the levels sensed a few seconds before. If conditions are improving, the cell swims straight. If they are getting worse, the cell takes random "tumbles."
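The decision rule just described can be caricatured in a few lines of code. Everything here – the one-dimensional space, the gradient function, the single-step memory – is a simplifying assumption; real chemotaxis involves receptor methylation, adaptation over several seconds, and three-dimensional swimming:

```python
import random

def chemotaxis(gradient, steps, seed=None):
    """Minimal 1-D caricature of E. coli chemotaxis. `gradient` maps a
    position to an attractant level. The cell remembers the level sensed
    on the previous step: if the level is rising it keeps its heading
    ('run'); otherwise it picks a random new heading ('tumble')."""
    rng = random.Random(seed)
    position, heading = 0.0, 1
    previous = gradient(position)
    for _ in range(steps):
        position += heading
        current = gradient(position)
        if current <= previous:            # conditions worsening: tumble
            heading = rng.choice([-1, 1])
        previous = current                 # update the short-term memory
    return position

# With the attractant increasing to the right, the cell climbs the
# gradient; with it increasing to the left, the same rule sends it left.
uphill = chemotaxis(lambda x: x, steps=1000, seed=0)
```

Even though each tumble is entirely random, the comparison between present and remembered levels is enough to carry the cell toward better conditions on average, which is why the mechanism counts as a genuine, if minimal, use of memory.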
As with the genome-based mechanisms discussed above, an activity like this is proto-cognitive in a sense that does not merely involve a relation to computation. Simple organisms sense and respond to events, both internal and external, in a way that implements a distinction between states and outcomes that are sought and maintained and other states and outcomes that are avoided. Boundaries between the system and its environment are controlled. With proto-cognition in this sense comes a kind of minimal subjectivity. Simple organisms have a "point of view" in a richer sense than, say, a digital camera, which also senses and computes, but does not control its boundaries and maintain a metabolism.
Is proto-cognition of this kind always present in cells? Even if it is always present now, does it need to be? Perhaps proto-cognitive activity is a good idea for any (metabolically) living thing, and hence it readily evolves, even though it has no necessary connection to the metabolic side of life. If a metabolically active system had an easy enough environment, might it get away with none of this, at least until smarter competitors evolve? How lacking in proto-cognition could a viable living system be? Locomotion, for example, is optional for bacteria, not essential, though the majority of bacteria apparently do it.
If we look at reasonable candidates for the simplest organisms known, the bacteria called Mycoplasma, we find that they do engage in adjustment of metabolism to external events. One of the findings taken to be quite striking when these organisms were recently studied was the dynamic nature of their gene regulation. Mycoplasma are not ideal examples because they went backwards with respect to complexity from ancestors with larger genomes. They are not remnants of old forms, but cases of reduction due to a parasitic lifestyle. More generally, as far as I know, the genomes of all bacteria sequenced thus far (including Mycoplasma) include at least some genes for "signal transduction" systems, which regulate cell metabolism by reacting to external circumstances.
Perhaps there are actual-world cases which approximate being metabolisms with no proto-cognitive adjustment of activity to conditions. It would take a lot to show that there never were any organisms like this – to show that the only ways to maintain a viable metabolism include processes that can, on independent grounds, be considered proto-cognitive. What is known, at least, is that proto-cognition is widespread in bacteria and present also in archaea. The bacteria/archaea split is the oldest known evolutionary split between kinds of life on earth, dating from something like 3.5 billion years ago (though archaea, which have been studied less, appear to have fewer signal transduction systems than bacteria).
So at present there is both theoretical and empirical uncertainty about how closely proto-cognitive activities and metabolism are connected. By "theoretical" uncertainty, I refer to uncertainty about where the boundaries of the proto-cognitive lie. This boundary will not be sharp, and non-competing broader and narrower concepts will probably be defensible, but a better specification than the one I've been using here ought to be possible. It is probably a mistake to restrict proto-cognition to capacities that implement a flow from sensors to effectors, especially if sensing itself is restricted to "exterosensing" or tracking external conditions. I have emphasized those capacities here because they present a clear case, and the easiest to use when arguing for the role of proto-cognitive activities in prokaryotes. The concept of proto-cognition should probably be broadened, however, to include at least some activities that achieve purely internal coordination – the spatial or temporal coordination of actions across parts of a system, as opposed to coordinating actions with external conditions. Aside from this theoretical uncertainty, we also need to know the lower limits on proto-cognitive control in real metabolisms. One view that might be defended is that proto-cognitive abilities are distinct from metabolic life, but are a natural and expected addition, something living systems quickly gain. Another possibility is that the two are more inextricably tied together. Closer ties might be present in some particular kinds of life. In multicellular organisms, a great deal of signaling between cells goes into the making of the body itself and keeping that body running. "Cell-to-cell signaling" is not merely a matter of one cell affecting another; it involves interaction between specialized producers and receptors of evolved signal molecules.
Even if proto-cognition and metabolism are extricable in principle, they are tightly connected in organisms like us. The line between the "information processing" side of human brain activity and the metabolic side is porous. One example from many that could be given from recent neuroscience concerns the role of the diffusion of small molecules, like nitric oxide, through the brain. These molecules, which affect plasticity at synapses and the distribution of receptors for neurotransmitters, move not only between neurons, but are released and taken up by blood vessels and glial cells as well. In addition, in living systems the active structures change continually just from being used – they reflect their immediate history and are weakly affected by what many other parts are doing. When the same neuron is exposed repeatedly to the same stimulus, it does not simply reproduce the same pattern of firing, but behaves differently each time. This sensitivity to history is not mere "noise," but raw material on which adaptive plasticity can be built.
IV. Animal life
The earlier sections of this paper outlined a view of the material basis of life, and the relation between life and the beginnings of cognition and subjectivity. Some progress on the mind-body problem can be made through this general reshaping of the terrain, but it can only take us so far. The points about life made above apply as much to bacteria and plants, which are usually seen as lacking all subjective experience, as to animals like us. How far these general points take us depends on questions about continuities and discontinuities. One option at this stage is a radical view, albeit one that can be introduced in an innocuous way by combining the biological facts above with a simple form of functionalism. This form of functionalism holds that the "mental" has a qualitative side and a cognitive side, and they are closely tied together; the qualitative is just what the cognitive feels like from the inside. Cognition exists in simpler and more complex forms, and we saw that minimal forms are present in unicellular organisms. Perhaps, then, we should embrace a gradient with respect to both the cognitive and qualitative. Some form of subjective experience will then be present in all living things.
This view can be called biopsychism. (The term was introduced by Ernst Haeckel in 1892, with a meaning close to the one I have in mind here.) Biopsychism is one of a family of radical options, other members of which have seen extensive recent discussion. Classical panpsychism, which holds that all matter has a mental character, has been explicitly defended by Strawson, Goff, and others, and sympathetically discussed by Nagel and Chalmers. A variant on panpsychism has also been developed by the neuroscientist Giulio Tononi, based on a measure of "informational integration." This view holds that all systems containing interactions that can be described in terms of information flow have some amount of consciousness, provided that there is some degree of "integration" of this information, a criterion met by simple non-living switching devices, smartphones, and the like. According to biopsychism, the low end of the mental scale is inhabited not by unorganized matter (panpsychism), or simple machines (Tononi), but the simplest forms of life.
A problem with the assessment of all the radically "generous" views in this area is the difficulty of thinking about the difference between a complete absence of subjective experience and a minimal but nonzero scrap of it. Biopsychism has advantages over other members of its family, though. The argument for biopsychism derives, as I said, from an ordinary form of functionalism about the mental plus empirical facts about living systems. Subjectivity has more plausible minimal forms in living activity than in matter per se, or in sheer causal complexity of any kind. While the identification of prokaryotes as minimal experiencing subjects is startling, rocks, thermostats, and phones are not in the picture.
Biopsychism in something like the present sense has been endorsed by a number of writers – by Herbert Jennings around 1900, and more recently by Maxine Sheets-Johnstone and Lynn Margulis (though Haeckel himself, in his 1892 paper, endorsed panpsychism). If functionalism about the mind together with the empirical facts motivates a biopsychist view, what is to tell against it other than sheer weirdness? It is hard for us to think about the low end of the scale with respect to the qualitative side, because "thinking about" it includes imagining what it would feel like to be such a system, and here our sympathetic imagination founders. But that's not the fault of biopsychism.
I don't dismiss biopsychism entirely, but the rest of this paper will work within a less radical response to the ideas introduced in earlier sections. Metabolic activity gives rise to a certain kind of unit – a biological self, or a subject in a minimal sense. Being a subject of this kind is not sufficient for subjective experience, however. That exists only when the organism engages in sensing and action of a richer kind. In organisms like us, a to-and-fro involving the senses and action comprises much of the pattern of subjective experience. We have learned that this pattern in us has evolutionary origins in simpler forms, and is seen today in unicellular forms of life. That can be asserted without saying that subjective experience itself extends to all living things.
If this is true, what are the crucial stages and transitions that took organisms to subjective experience? Some such transitions took place before the evolution of animals and nervous systems. The evolution of the eukaryotic cell from physically simpler cells included changes in sensory and behavioral capacities even before these cells gathered into multicellular organisms. Though a fuller treatment would include this and related steps, I focus here on the evolution of animals.
The evolution of animals was one of several independent transitions to multicellular life, occurring perhaps 800-900 million years ago. Multicellularity creates a new kind of metabolic unit, hence a new kind of subject, and it makes possible the differentiation and specialization of parts with respect to the proto-cognitive capacities that had been crammed previously into single cells. Some of this is possible without a nervous system. Sponges and placozoa display some simple behaviors despite lacking a nervous system, and recent work has shown surprising proto-cognitive capacities in plants. But the elaboration of these capacities was greatly affected by the evolution of the nervous system.
Nervous systems probably arose quite quickly in the animal branch of the tree of life – perhaps 700 million years ago. For many people, this is the landmark that puts mentality on the table, at least as a possibility. Philosophers of mind often operate with a picture in which living activity is a kind of non-mental substrate, and then evolution lays a computer – the nervous system – on top of the merely living, after which cognition and subjective experience result.
Though this view may contain some truth, it is not straightforwardly aligned with the biology. First, the question of what a nervous system is does not have a straightforward answer, and the difficulties are relevant here. A nervous system is at least a means by which cells affect each other's electrical properties, by chemical signaling or direct contact. This kind of activity is found in plants and "non-neural" sponges, though. A narrower conception, perhaps one tacitly assumed in much biology, treats "neuron" as a partially morphological, as well as functional, category. A neuron in this sense is an electrically excitable cell that influences another cell by means of electrical or secretory mechanisms, and whose morphology includes specialized projections. Neurons in this sense are not seen outside animals. A nervous system, then, is an interacting collection of cells that are (or include) neurons in this sense.
What nervous systems do is achieve specific kinds of cell-cell interaction – interactions that are fast, targeted, and directional. The projections on a neuron enable that cell to affect the electrical properties of another cell some distance from it, without affecting all those along the way. The contrast between neural and non-neural organisms is not one between organisms with networks of cells that affect each other's electrical properties and organisms without them. The contrast involves alternative modes of cell-to-cell interaction, especially a distinction between fast, targeted, and directional influence as opposed to more diffuse patterns of influence that result from a release and uptake of chemicals not organized with projections and synapses. The general character of the influence that one neuron has on another is something also seen outside neural organisms, but the speed and targeting of this influence are special.
The earliest animal fossils, those of the Ediacaran period, 635-540 million years ago, are the remains of soft-bodied sea creatures that appear to have had very simple behavioral capacities. Genetic evidence makes it likely that nervous systems had evolved by this time, and some of the basic animal groups had begun to diversify. But if the fossilized bodies are a guide, there is little indication of complex sensorimotor capacities; Ediacaran animals had no legs, claws, or antennae, and there are no signs of complicated eyes. Many of these animals seem to have lived on the sea floor, grazing on microbes or filter-feeding, and there is little or no evidence of predation. Nervous systems at this time may have had functions quite different from those we usually associate with them now; they may have functioned largely in the internal coordination of the first animal actions, and also in the coordination and timing of physiological and developmental events such as metamorphosis. Rather than the sensorimotor control that contemporary philosophy of mind emphasizes, their role may have been largely one of pulling the animal together.
The Cambrian period, beginning around 540 million years ago, sees a rapid diversification of animals, and also new kinds of bodies – bodies with legs and claws, along with sophisticated eyes of several kinds. From the Cambrian onwards, animal evolution features behavioral regimes that are recognizably "modern," with extensive interaction between individuals, including predation. The initiators and drivers of this change are controversial, but a range of views hold that one important feature of Cambrian evolution was a process of feedback that linked the evolution of bodies with the evolution of new kinds of behavioral interaction. The evolution of image-forming eyes may have been particularly important, a trigger for other changes. Whether eyes were pivotal or not, the role for nervous systems that we are familiar with – the fine-grained linking of perception and action – seems to have become more prominent.
The most behaviorally sophisticated animals of this time were arthropods such as trilobites, and simple fish. Did such animals have subjective experience? This question is hard to address for a host of reasons – factual uncertainties about their nervous systems and lives, as well as the philosophical difficulties. But I suggest that at least from this point, the positions organized around gradients, sketched earlier in this section, are well-motivated. From this point onwards the evolving differences between animals have a mostly quantitative character: some animals have more neurons, more sophisticated learning and categorization, more complex behavior. These changes occurred along several independently evolving lines – especially in some vertebrates, some arthropods (e.g., bees, spiders, crabs), and a few molluscs (cephalopods).
Perhaps my focus on the behavioral complexity of the Cambrian is an error; perhaps it is a prejudice to associate subjective experience only with that kind of nervous system and activity. Perhaps more windowless and self-absorbed lives are equally plausible bases for subjective experience. But at least from this period onwards there seems good reason to work within a gradient-structured view of subjective experience and its evolution.
To conclude this section I will look at an objection to the ideas just above. That discussion envisaged a fairly simple relationship between the cognitive and qualitative sides of the mind. An important theme of recent neurobiological work, though, sometimes explicit and sometimes implicit, is a rejection of any simple mapping between the richness of the cognitive and qualitative. Much of the cognitive processing going on in ordinary humans has no subjective feel at all. The subset that we do experience appears to involve the exercise of a specific set of skills, a style of processing that is probably not found in various other animals that can nonetheless perceive and navigate the world. When those capacities evolved, it might be argued, so did subjective experience, and not before; vague talk of "gradients" does not take seriously what we have been learning. Subjective experience, on this view, is probably an evolutionary latecomer, and rare among animals.
One basis for this argument is work showing the absence of subjective experience not only in "early" stages of perceptual processing of various kinds, but in perceptual states with quite direct roles in the control of behavior. The "dual stream" model of vision developed by David Milner and Melvyn Goodale posits two paths by which visual information is processed in the mammalian brain, of which only one, the "ventral stream," leads to experiences felt as vision. Ventral stream vision functions in the recognition of objects. The dorsal stream handles basic navigation and tasks such as reaching, and does so in a way that can produce effects akin to "blindsight," where a person denies being able to see but can act effectively on some visual information. Milner and Goodale distinguish basic sensorimotor abilities from actions based on the construction of an "internal model" of the world, and they associate visual experience only with the latter. "Global workspace" models of consciousness developed by Bernard Baars, Stanislas Dehaene and others, along with views of consciousness based on sophisticated forms of memory and attention, also appear to motivate a latecomer view, as they all associate consciousness with capacities that go well beyond the mere ability to sense, act, and remember.
In response to this argument, I agree that recent data casts doubt on any simple mapping between the cognitive and qualitative. A latecomer view is one response to these findings. There's also another, though, which I will call the transformation view. According to this view, some late-evolving features of our brains do greatly affect the nature of subjective experience, but they don't bring it into being. They modify more basic kinds of experience that were already present, and this may include pushing some things into the background, so far back as to make them hard to report on and remember. Basic forms of subjective experience were present earlier and require less neurological complexity, and these kinds of experience were then evolutionarily transformed. The distinction between subjective experience and consciousness, discussed earlier in this paper, is important in this context. Much human experience does involve the integration of different senses, integration of the senses with memory, and so on, but there's also an ongoing role for what seem to be old forms of experience that appear as intrusions into more organized kinds of processing. Consider the intrusion of sudden pain, or of the "primordial emotions" in Derek Denton's sense. Those are forms of subjective experience with an obvious biological rationale, but not apparently reliant on centralization and internal model-building. They may be older and more widely distributed among animals.
V. Functionalism, multiple realization, and AI
How do the ideas in this paper bear on the functionalist doctrine of the multiple realizability of mental states, and the prospects for "strong AI"?
A package of views popular since at least the 1970s runs as follows. Human minds have a particular biochemical basis, but this is a contingent feature, not a necessary one. A physical system has mental states in virtue of its abstract causal organization, in virtue of how its states are connected to sensory input, behavioral output, and each other. In us, the causal roles characteristic of the mental have particular physical realizers, and those physical realizers are brain states with a chemistry of proteins, lipids, nucleic acids, and so on. But other realizers could, in principle, play the same roles. This means that a computer with a very different chemistry could have physical states which realize the causal roles characteristic of a human mental life, if suitably programmed and (perhaps) if connected to a robot of the right kind. Artificial intelligence is possible in this strong sense ("strong AI"). Further, any system that has the same functional and hence cognitive profile as a human agent must have the same subjective experiences.
I think this package is probably wrong in several respects. In particular, there is no reason to believe that a system with the physical make-up envisaged in strong AI scenarios could have the kind of subjective experience present in a human agent. The strongest claim that might be made here is that AI systems of the kind usually envisaged in the literature would have no subjective experience at all. My argument here will be less ambitious; I'll argue that there is no reason to think they could have subjective experience of a kind relevantly similar to ours.
When the ideas about AI and multiple realization laid out above are questioned, it is usually thought that the choice to be made is between the view that a set of organizational features suffices for mentality, and the view that these organizational features plus a particular make-up are needed. Once the choice is put that way, it is natural to wonder how the material details could make much difference. But that posing of the question is not applicable to the view defended in this paper. My claim is not that nonbiological materials that do all the same things might not count because of their physical nature. Rather, the usual candidates offered as a nonbiological basis for mentality will not do the same things. They will be functionally different, not merely different in "hardware" or "make-up." The view defended here can be expressed as a kind of functionalism. "Functional" properties in the sense relevant to philosophy of mind are grain-specific. Any system has coarser and finer grained functional features. Long-standing habits in discussion of functionalism have accustomed people to the idea that rather coarse-grained functional profiles are the ones that are most relevant. Against that background, arguments about the importance of the metabolic can be expressed by saying that a set of fine-grained functional properties of living systems matters more than is usually supposed.
Suppose, then, that we have a living and a robotic system that can both be described in terms of the same coarse-grained set of cognitive categories: perception, learning, decision-making. Any system that perceives, learns, or decides will do so in a particular fine-grained way – fine-grained with respect to behavior and also with respect to internal processes. Those details, it is often thought, are irrelevant to its status as a perceiver or decider. As a result, the artificial system can genuinely perceive, or genuinely learn, despite doing so in a way that differs from a human perceiver or learner in fine-grained ways. So far, I agree. There are reasonable coarse-grained senses of "learn" and "perceive" in which anything with the right coarse-grained functional profile, including a robot, does learn and perceive.
Subjective experience itself has coarse and fine-grained features; there is being in pain, and being in a particular kind of pain. A commonly held view is that an artificial system might, in principle, be a genuine duplicate of a human with respect to its functional profile, and hence must also have the same subjective experience. And setting aside duplicates, two systems with similar functional profiles should have similar subjective experience. We can then be more and more confident that an artificial system has similar qualitative states to a human as the functional profile of the artificial system approximates the human one more and more closely.
However, it may be false that any system with the material properties usually envisaged for the AI system – a device made of metal, silicon, and other standard computer materials – could be close enough to the functional profile of a human for this similarity-based argument to show something about the subjective experience of the AI system. Part of the message of earlier sections of this paper is the enormous functional difference between a living system and this AI system, despite any coarse-grained cognitive similarity. This difference can be hard to keep in focus because the AI system, imagined or real, has been designed as a non-living analogue of a living system. It's only a partial analogue, though; it has a combination of no metabolism but a lot of information-processing. In the living system, the information-processing side of its activity is integrated with the metabolic side, so the two can only share coarse-grained functional properties.
One way to criticize defences of strong AI is to say that the AI system's "cognitive" or "information processing" features are fake; it merely simulates those things, rather than realizing them. John Searle has defended this view for many years. That is not the view I am defending here. The AI system's processing is not fake, but it's different. As noted earlier, concepts such as sensing and computation have broad meanings that are not specific to processes embedded in a metabolism. The same is true of other cognitive notions such as memory, inference, and so on. An artificial system lacking a metabolism can be a genuine realizer of those capacities, and hence might share some of the coarse-grained cognitive features seen in a living system. This coarse-grained cognitive profile is part of what a living system has, but it also has fine-grained functional properties – a host of micro-computational activities in each cell, signal-like interactions between cells, self-maintenance and control of boundaries, and so on. Those finer-grained features are not merely ways of realizing the cognitive profile of the system. They matter in ways that can independently be identified as cognitively important. An example that can be reprised from section III is plasticity. Living systems change their input-output profile as a result of their own activity. The system changes what it does, in non-trivial ways, just as a result of doing it. Causal processes in biological systems are based in networks with many redundancies and small effects. These have consequences for robustness and adaptability. Computers, in contrast, have different reliability properties, and have them for good economic reasons. They are engineered not to change continually in slight ways from their mere operation, except when this changeability is programmed in. Some low-level physical changes will be inevitable, but these are engineered to be as small as possible.
Computers are different from living systems in ways that make engineering sense (on one side) and biological sense (on the other).
The points in this section can be made in an especially focused way by looking at an argument offered by Chalmers for the multiple realizability (or "organizational invariance") of the mental, including subjective experience. This is his "dancing qualia" argument. Imagine an ordinary human agent for whom a backup control device is built out of ordinary computer hardware. The second control device is connected by radio transmitters to the body's sensors and effectors, so that when activated it can control behavior in the usual way. It is assumed that the backup system realizes exactly the same functional profile as the subject's brain, so that the subject's behavior is unaffected. Imagine now a rapid switching between "natural" or brain-based control of behavior and control by the backup system. If the character of subjective experience depends on the material make-up as well as the functional properties of a system, does their experience jump between different forms as the switching is done, or perhaps flip in and out of existence entirely, despite the agent's behavior continuing uninterrupted? These suggestions, Chalmers thinks, lead to extremely implausible consequences, and we should instead conclude that "by far the most plausible hypothesis is that replacement of neurons while preserving functional organization will preserve qualia, and that experience is wholly determined by functional organization."
The partly artificial system used in this thought-experiment – human body plus silicon-based controller – is called by Chalmers a "fine-grained functional duplicate" of the original human. Given the nature of grain differences, though, functional similarity is a matter of degree. Chalmers also says that he will "always focus on a level of organization fine enough to determine the behavioral capacities and dispositions of a cognitive system." But behavioral dispositions are themselves grain-dependent. Two systems with the same behavioral dispositions at a coarse grain can differ with respect to timing, with respect to variation across their production of the "same" behavior on different occasions, and in many other ways. Given the differences between biological and artificial control devices discussed earlier in this paper, it is not possible for a biological controller and an artificial back-up to give rise to identical patterns of behavior, even if they can give rise to similar ones.
Without the strong assumptions of behavioral equivalence, Chalmers's thought-experiments do not have the consequences he supposes. In a "dancing qualia" case, even if much is conceded to Chalmers in the set-up, it will not be possible to switch seamlessly between two controllers without sudden cognitive and behavioral changes at some level of grain. Consider also the related scenario, discussed by many, that Chalmers calls "fading qualia." Here we assume gradual cell-by-cell replacement of neurons by silicon-based controllers. Chalmers imagines that as the replacement is done, the agent retains the same behavioral dispositions, so it is strange to suppose that they might lose their qualia. I reply that as the replacement is done, not only do their insides work more and more differently, they must behave more and more differently as well. The new agent is a quite different system. Nothing compels us to believe they have the kind of experience characteristic of human life.
I want to make clear the role of these points about behavioral divergence. I am not conceding that any system with the same behavior as a human must have the same mental states, and then noting that an AI system will not have the same behavior. Nearly all views in this area imply that behavioral equivalence is not sufficient for equivalence in mental states. The neural replacement arguments of Chalmers and others are not merely intended to put pressure on those who think behavioral properties are not enough; they are intended to show something more deeply problematic in the view that the biological basis of cognition matters for subjective experience. They are intended to be cases in which not only all the behaviors produced, but all the cognitive features underlying those behaviors, are held constant. Could subjective experience vary with all that held fixed? In response, I say that if one replaces a living system with a nonliving one, those things are not held fixed. The assumption of behavioral equivalence is supposed to make the switch between a biological and a non-biological controller appear insignificant. But this difference is significant, with respect to both internal activity and the behavior that results. The behavioral differences are not themselves the features responsible for a difference in subjective experience, but consequences of the mass of internal features that do make the difference.
Setting aside arguments about the "duplication" of human functional profiles in artificial systems, what becomes of the possibility of very different physical bases for mental states? The argument made here is not that the particular chemical elements and molecules must matter. They might be hard to replace in some cases, easier in others. But the functional profile that would have to be realized includes living activity. If this can be realized artificially, it would be achieved on a different path from that pursued in familiar AI and robotics projects.
One contributor to the bridging of the "explanatory gap" is a critical rethinking of intuitions that lie behind the standard ways of approaching the problem. Other contributions will come from new pieces of theory. In working towards the first of these, I criticized the intuitions seen in arguments against materialism in general (section II) and against the family of materialist views that make use of a biological framework (section V). On the positive side, I laid out a set of biological resources relevant to the problem, and outlined one approach using these resources while noting other options along the way. The approach I took was to emphasize the biological basis of subjectivity. Other possible avenues include views that assert stronger ties between metabolic activity and cognition itself, and radical options such as biopsychism.
Further questions arise immediately along the path taken in this paper. What is the significance of the collective nature of animals – the fact that animals are multicellular entities – given that cells in a free-living state are themselves minimal subjects and multicellularity depends on signaling? Second, the ideas developed in section IV about the origins of experience in animal evolution depend on unresolved questions about the nature of the knit between the cognitive and qualitative. And while section V rejected some familiar views about multiple realizability, once the role of the metabolic has been embraced, many questions remain about how closely the mind is tied to the biochemical features of life on earth.
* * *
* This paper is based on a talk given at the NYU "Modern Philosophy" Conference, 2014. I am grateful to those present for helpful comments. Many parts of this paper have been influenced by discussions with Rosa Cao.
Other parts of the view are developed in a companion paper, "Animal Evolution and the Origins of Experience," to appear in David Livingstone Smith (ed.), How Biology Shapes Philosophy: New Foundations for Naturalism (Cambridge: Cambridge University Press, in press). Ideas in Evan Thompson's Mind in Life (Cambridge: Belknap Press, 2010) overlap with and have influenced the present paper.
For the initial shift, see Aristotle's De Anima and René Descartes's Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences (1637). For biologically oriented discussions of materialism, see Herbert Spencer, Principles of Psychology (London: Longman, Brown and Green, 1855); George Lewes, Problems of Life and Mind (London: Kegan Paul, 1875); John Dewey, Experience and Nature (La Salle: Open Court, 1925); C. D. Broad, The Mind and Its Place in Nature (London: Routledge & Kegan Paul, 1925). For the "identity theory," see Ullin T. Place, "Is Consciousness a Brain Process?" British Journal of Psychology XLVII (February 1956): 44–50; Herbert Feigl, "The 'Mental' and the 'Physical,'" in Minnesota Studies in the Philosophy of Science, Volume 2: Concepts, Theories, and the Mind-Body Problem, edited by Herbert Feigl, Michael Scriven, and Grover Maxwell (Minneapolis: University of Minnesota Press, 1958); J.J.C. Smart, "Sensations and Brain Processes," Philosophical Review LXVIII (April 1959): 141–156; David Lewis, "An Argument for the Identity Theory," this JOURNAL LXIII (January 1966): 17–25; David Armstrong, A Materialist Theory of the Mind (London: Routledge and Kegan Paul, 1968).
For a dissenting view, see Mark Bedau, "What Is Life?" in Sahotra Sarkar and Anya Plutynski (eds.), A Companion to the Philosophy of Biology (New York: Blackwell, 2008). For a deflationary view of life similar to the one defended here, see Philip Kitcher, "Things Fall Apart," The Stone (New York Times Opinionator), September 8, 2013, URL=http://opinionator.blogs.nytimes.com/2013/09/08/things-fall-apart/?_r=0.
See Joseph Levine, "Materialism and Qualia: The Explanatory Gap," Pacific Philosophical Quarterly LXIV (October 1983): 354-361.
For treatments employing this broad sense of "consciousness," see Thomas Nagel, "What is it Like to be a Bat?" Philosophical Review LXXXIII (October 1974): 435-450; David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (Oxford: Oxford University Press, 1996); Ned Block, "Comparing the Major Theories of Consciousness." In Michael Gazzaniga (ed.), The Cognitive Neurosciences IV (2009): 1111-1123.
See Derek Denton, Michael McKinley, Michael Farrell, and Gary Egan, "The Role of Primordial Emotions in the Evolutionary Origin of Consciousness," Consciousness and Cognition XVIII (June 2009): 500–514.
In this part of the paper I draw on Peter Hoffmann's book Life's Ratchet: How Molecular Machines Extract Order from Chaos (New York: Basic Books, 2012) and Peter Moore's "How Should We Think About the Ribosome?" Annual Review of Biophysics XLI (2012): 1–19. I have also benefitted here from discussion with Derek Skillings; see his "Mechanistic Explanation of Biological Processes" (Philosophy of Science, forthcoming). Special features of causal relations present at the nanoscale in biology are discussed also in Marco Nathan, "Causation by Concentration," British Journal for the Philosophy of Science LXV (2) (2012): 191-212.
For simpler models see Tibor Gánti, The Principles of Life (Oxford: Oxford University Press, 2003) and Freeman Dyson, The Origins of Life, 2nd edition (Cambridge: Cambridge University Press, 1999). Dyson did call his a "toy model," and his main aim was to reassert the importance of metabolism as opposed to replication. He might accept the view outlined here.
See Leslie Orgel, "The Implausibility of Metabolic Cycles on the Prebiotic Earth," PLoS Biology VI (1) (2008): e18, and Eörs Szathmáry, "The Evolution of Replicators," Philosophical Transactions of the Royal Society of London B CCCLV (November 2000): 1669-76.
See Nagel, "Armstrong on the Mind," Philosophical Review LXXIX (July 1970): 394–403; Saul Kripke, Naming and Necessity (Cambridge: Harvard University Press, 1972); Chalmers op. cit.
Think of the situation in Bayesian terms: the evidence (in imagination) is equally likely given the truth and the falsity of materialism, so the prior probabilities, whatever they were, remain unchanged.
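The point can be checked with a toy calculation (a hypothetical illustration with made-up numbers, not part of the original argument): when the imagined evidence E is equally likely whether the hypothesis is true or false, Bayes' theorem returns a posterior equal to the prior, whatever that prior was.

```python
# Toy Bayesian update. If P(E | H) = P(E | not-H), the evidence is
# non-diagnostic and the posterior equals the prior.
# All numbers below are arbitrary, chosen only for illustration.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

for prior in (0.2, 0.5, 0.9):
    # Equal likelihoods under H and not-H: the update changes nothing.
    print(prior, posterior(prior, 0.7, 0.7))  # posterior equals the prior
```

With unequal likelihoods the same function shifts the prior, which is what makes the equal-likelihood case diagnostic of why imaginability alone carries no evidential weight here.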
Gottfried W. Leibniz, Monadology (1714), section 17, trans. Robert Latta (Oxford: Oxford University Press, 1898).
For E. coli chemotaxis, see Melinda Baker, Peter Wolanin, and Jeffrey Stock, "Signal Transduction in Bacterial Chemotaxis," Bioessays XXVIII (January 2006): 9–22.
For genetic control processes in Mycoplasma, see Marc Güell and 15 other authors, "Transcriptome Complexity in a Genome-Reduced Bacterium," Science CCCXXVI (November 2009): 1268-1271.
The smallest known number is one. Carsonella is technically a bacterium, but lives in a confined symbiosis inside specialized cells in sap-sucking insects. It has a vastly reduced genome of 182 genes (with many overlapping), but including one for signal transduction. It is so far from being capable of free-living existence that it is probably better regarded as an organelle – a specialized part of the containing animal cell. Carsonella lacks genes for the manufacture of membranes, for example, and for control of cell division (see Javier Tamames, Rosario Gil, Amparo Latorre, Juli Peretó, Francisco J Silva, and Andrés Moya, "The Frontier Between Cell and Organelle: Genome Analysis of Candidatus Carsonella ruddii," BMC Evolutionary Biology VII (2007) doi:10.1186/1471-2148-7-181). For a general discussion of bacterial signal transduction and minimal cognition, see Pamela Lyon, "The Cognitive Cell: Bacterial Behavior Reconsidered," Frontiers in Microbiology VI (2015): 264. doi: 10.3389/fmicb.2015.00264.
Some archaea can swim faster than a cheetah can run, if speed is measured in body lengths per second. See Bastian Herzog and Reinhard Wirth, "Swimming Behavior of Selected Species of Archaea," Applied and Environmental Microbiology LXXVIII (March 2012): 1670-1674.
See Fred Keijzer, Marc van Duijn, and Pamela Lyon, "What Nervous Systems Do: Early Evolution, Input–Output, and the Skin-Brain Thesis," Adaptive Behavior XXI (February 2013): 67–85, and Gáspár Jékely, Fred Keijzer, and Peter Godfrey-Smith, "An Option Space for Early Neural Evolution," forthcoming in Philosophical Transactions of the Royal Society B.
See Christopher Moore and Rosa Cao, "The Hemo-Neural Hypothesis: On the Role of Blood Flow in Information Processing," Journal of Neurophysiology XCIX (May 2008): 2035-2047.
For experimental work on the sensitivity to history in neurons exposed to the same stimuli, see Jian-young Wu, Yang Tsau, Hans-Peter Hopp, Lawrence Cohen, Akaysha Tang, and Chun Xiao Falk, "Consistency in Nervous Systems: Trial-to-Trial and Animal-to-Animal Variations in the Responses to Repeated Applications of a Sensory Stimulus in Aplysia," Journal of Neuroscience XIV (March 1994): 1366-1384, and Asaf Gal, Danny Eytan, Avner Wallach, Maya Sandler, and Jackie Schiller, "Dynamics of Excitability over Extended Timescales in Cultured Cortical Neurons," The Journal of Neuroscience XXX (December 2010): 16332–16342. For its relation to adaptive forms of plasticity, see Ralph Greenspan, An Introduction to Nervous Systems (Cold Spring Harbor: Cold Spring Harbor Laboratory Press, 2007), p. 70.
Ernst Haeckel, "Our Monism: The Principles of a Consistent, Unitary World-View," The Monist II (July 1892): 481-486.
See Nagel, "Panpsychism," in Mortal Questions (Cambridge: Cambridge University Press, 1979); Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism? (Exeter: Imprint Academic, 2006); Chalmers op. cit.; Philip Goff, "The Phenomenal Bonding Solution to the Combination Problem," forthcoming in Godehard Brüntrup and Ludwig Jaskolla (eds.), Panpsychism (Oxford: Oxford University Press); and Giulio Tononi, "Consciousness as Integrated Information: A Provisional Manifesto," Biological Bulletin CCXIV (December 2008): 216-242.
See Herbert S. Jennings, Behavior of the Lower Organisms (New York: Columbia University Press, 1904); Maxine Sheets-Johnstone, The Primacy of Movement (Amsterdam: John Benjamins Press, 1999); Lynn Margulis, "The Conscious Cell," Annals of the New York Academy of Sciences CMXXIX (April 2001): 55-70.
The companion paper cited in note 1 discusses this topic in more detail.
For plants, see Daniel Chamovitz, What a Plant Knows: A Field Guide to the Senses (New York: Farrar, Straus and Giroux, 2012). For sponges, see Sarah Leys and Robert Meech, "Physiology of Coordination in Sponges," Canadian Journal of Zoology LXXXIV (2) (2006): 288–306. For the historical sequence, see Kevin Peterson, James Cotton, James Gehling, and Davide Pisani, "The Ediacaran Emergence of Bilaterians: Congruence Between the Genetic and the Geological Fossil Records," Philosophical Transactions of the Royal Society of London B CCCLXIII (January 2008): 1435–1443.
Jékely, Keijzer, and Godfrey-Smith (op. cit.).
See Charles Marshall, "Explaining the Cambrian 'Explosion' of Animals," Annual Review of Earth and Planetary Sciences XXXIV (May 2006): 355–84, and Michael Trestman, "The Cambrian Explosion and the Origins of Embodied Cognition," Biological Theory VIII (July 2013): 80–92.
See Stanislas Dehaene, Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts (New York: Farrar, Straus and Giroux, 2014) for a range of work bearing on what I call the "latecomer" option, and also David Milner and Melvyn Goodale, Sight Unseen: An Exploration of Conscious and Unconscious Vision (Oxford: Oxford University Press, 2005).
Some scientific work in this tradition is directed at the explanation of "consciousness," and may make a distinction between consciousness and subjective experience similar to the one I make. Philosophers such as Jesse Prinz are explicit in holding a broad view of consciousness which treats all subjective experience as conscious; see Prinz, "A Neurofunctional Theory of Consciousness," in Andrew Brook and Kathleen Akins (eds.), Cognition and the Brain: The Philosophy and Neuroscience Movement (Cambridge: Cambridge University Press, 2005). For a defence of a latecomer view of pain itself, see Brian Key, "Fish Do Not Feel Pain and its Implications for Understanding Phenomenal Consciousness," Biology and Philosophy XXX (December 2015): 149-165. For "workspace" views see Bernard Baars, A Cognitive Theory of Consciousness (Cambridge: Cambridge University Press, 1988) and Dehaene op. cit.
Influential discussions include Hilary Putnam, "Psychological Predicates," in William Capitan and Daniel Merrill (eds.), Art, Mind, and Religion (Pittsburgh: University of Pittsburgh Press, 1967), pp. 37-48, and Jerry Fodor, Psychological Explanation (New York: Random House, 1968).
Some points made here are indebted to William Bechtel and Jennifer Mundale, "Multiple Realizability Revisited: Linking Cognitive and Neural States," Philosophy of Science LXVI (June 1999): 175-207.
For related arguments about functional similarity and its relation to phenomenological similarity, see Ned Block, "The Canberra Plan Neglects Ground," in Terry Horgan, Marcello Sabates and David Sosa (eds.), Qualia and Mental Causation in a Physical World: Themes from the Philosophy of Jaegwon Kim (Cambridge: Cambridge University Press, 2015). Block sees these considerations as pushing away from functionalism, towards a "neural" approach to consciousness. My view in this paper is presented as a modification of functionalism, but the positions may not differ much. It's possible to see "functionalism" as tied inextricably to a claim of the sufficiency of coarse-grained information-processing features for a full explanation of mentality, a claim I reject.
See John Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences III (September 1980): 417-424, and The Rediscovery of the Mind (Cambridge, MA: MIT Press, 1992).
See Chalmers, "Absent Qualia, Fading Qualia, Dancing Qualia," in Thomas Metzinger (ed.), Conscious Experience (Paderborn: Ferdinand Schöningh, 1995), pp. 309-328. Chalmers notes that his argument builds on earlier thought-experiments by Zenon Pylyshyn and others; see "The 'Causal Power' of Machines," Behavioral and Brain Sciences III (September 1980): 442-4.
Chalmers op. cit., p. 324.